00:00:00.000 Started by upstream project "autotest-per-patch" build number 132858 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.091 Using shallow fetch with depth 1 00:00:00.091 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.091 > git --version # timeout=10 00:00:00.133 > git --version # 'git version 2.39.2' 00:00:00.133 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.171 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.171 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.165 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.174 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.184 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.184 > git config core.sparsecheckout # timeout=10 00:00:04.193 > git read-tree -mu HEAD # timeout=10 00:00:04.208 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.232 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.232 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.326 [Pipeline] Start of Pipeline 00:00:04.337 [Pipeline] library 00:00:04.339 Loading library shm_lib@master 00:00:04.339 Library shm_lib@master is cached. Copying from home. 00:00:04.353 [Pipeline] node 00:00:04.363 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.365 [Pipeline] { 00:00:04.373 [Pipeline] catchError 00:00:04.374 [Pipeline] { 00:00:04.382 [Pipeline] wrap 00:00:04.386 [Pipeline] { 00:00:04.391 [Pipeline] stage 00:00:04.392 [Pipeline] { (Prologue) 00:00:04.546 [Pipeline] sh 00:00:04.822 + logger -p user.info -t JENKINS-CI 00:00:04.838 [Pipeline] echo 00:00:04.839 Node: WFP4 00:00:04.847 [Pipeline] sh 00:00:05.139 [Pipeline] setCustomBuildProperty 00:00:05.147 [Pipeline] echo 00:00:05.148 Cleanup processes 00:00:05.153 [Pipeline] sh 00:00:05.427 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.427 3051571 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.439 [Pipeline] sh 00:00:05.718 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.718 ++ grep -v 'sudo pgrep' 00:00:05.718 ++ awk '{print $1}' 00:00:05.718 + sudo kill -9 00:00:05.718 + true 00:00:05.729 [Pipeline] cleanWs 00:00:05.735 [WS-CLEANUP] Deleting project workspace... 00:00:05.736 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.740 [WS-CLEANUP] done 00:00:05.743 [Pipeline] setCustomBuildProperty 00:00:05.753 [Pipeline] sh 00:00:06.025 + sudo git config --global --replace-all safe.directory '*' 00:00:06.112 [Pipeline] httpRequest 00:00:06.896 [Pipeline] echo 00:00:06.897 Sorcerer 10.211.164.20 is alive 00:00:06.905 [Pipeline] retry 00:00:06.907 [Pipeline] { 00:00:06.919 [Pipeline] httpRequest 00:00:06.923 HttpMethod: GET 00:00:06.923 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.923 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.925 Response Code: HTTP/1.1 200 OK 00:00:06.926 Success: Status code 200 is in the accepted range: 200,404 00:00:06.926 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.058 [Pipeline] } 00:00:08.070 [Pipeline] // retry 00:00:08.075 [Pipeline] sh 00:00:08.351 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.366 [Pipeline] httpRequest 00:00:08.754 [Pipeline] echo 00:00:08.755 Sorcerer 10.211.164.20 is alive 00:00:08.764 [Pipeline] retry 00:00:08.766 [Pipeline] { 00:00:08.780 [Pipeline] httpRequest 00:00:08.784 HttpMethod: GET 00:00:08.785 URL: http://10.211.164.20/packages/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz 00:00:08.785 Sending request to url: http://10.211.164.20/packages/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz 00:00:08.805 Response Code: HTTP/1.1 200 OK 00:00:08.806 Success: Status code 200 is in the accepted range: 200,404 00:00:08.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz 00:00:48.518 [Pipeline] } 00:00:48.536 [Pipeline] // retry 00:00:48.543 [Pipeline] sh 00:00:48.825 + tar --no-same-owner -xf spdk_575641720c288b2c640c5d52a9691dd12c5f86d3.tar.gz 00:00:51.364 [Pipeline] sh 00:00:51.646 + git -C spdk log --oneline -n5 00:00:51.646 575641720 lib/trace:fix encoding format in trace_register_description 00:00:51.646 92d1e663a bdev/nvme: Fix depopulating a namespace twice 00:00:51.646 52a413487 bdev: do not retry nomem I/Os during aborting them 00:00:51.646 d13942918 bdev: simplify bdev_reset_freeze_channel 00:00:51.646 0edc184ec accel/mlx5: Support mkey registration 00:00:51.656 [Pipeline] } 00:00:51.669 [Pipeline] // stage 00:00:51.677 [Pipeline] stage 00:00:51.679 [Pipeline] { (Prepare) 00:00:51.694 [Pipeline] writeFile 00:00:51.709 [Pipeline] sh 00:00:51.989 + logger -p user.info -t JENKINS-CI 00:00:52.000 [Pipeline] sh 00:00:52.282 + logger -p user.info -t JENKINS-CI 00:00:52.293 [Pipeline] sh 00:00:52.573 + cat autorun-spdk.conf 00:00:52.573 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.573 SPDK_TEST_NVMF=1 00:00:52.573 SPDK_TEST_NVME_CLI=1 00:00:52.573 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.574 SPDK_TEST_NVMF_NICS=e810 00:00:52.574 SPDK_TEST_VFIOUSER=1 00:00:52.574 SPDK_RUN_UBSAN=1 00:00:52.574 NET_TYPE=phy 00:00:52.580 RUN_NIGHTLY=0 00:00:52.584 [Pipeline] readFile 00:00:52.608 [Pipeline] withEnv 00:00:52.610 [Pipeline] { 00:00:52.622 [Pipeline] sh 00:00:52.904 + set -ex 00:00:52.905 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:52.905 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.905 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.905 ++ SPDK_TEST_NVMF=1 00:00:52.905 ++ SPDK_TEST_NVME_CLI=1 00:00:52.905 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.905 ++ SPDK_TEST_NVMF_NICS=e810 
00:00:52.905 ++ SPDK_TEST_VFIOUSER=1 00:00:52.905 ++ SPDK_RUN_UBSAN=1 00:00:52.905 ++ NET_TYPE=phy 00:00:52.905 ++ RUN_NIGHTLY=0 00:00:52.905 + case $SPDK_TEST_NVMF_NICS in 00:00:52.905 + DRIVERS=ice 00:00:52.905 + [[ tcp == \r\d\m\a ]] 00:00:52.905 + [[ -n ice ]] 00:00:52.905 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:52.905 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:52.905 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:52.905 rmmod: ERROR: Module i40iw is not currently loaded 00:00:52.905 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:52.905 + true 00:00:52.905 + for D in $DRIVERS 00:00:52.905 + sudo modprobe ice 00:00:52.905 + exit 0 00:00:52.940 [Pipeline] } 00:00:52.972 [Pipeline] // withEnv 00:00:52.975 [Pipeline] } 00:00:52.982 [Pipeline] // stage 00:00:52.987 [Pipeline] catchError 00:00:52.987 [Pipeline] { 00:00:52.994 [Pipeline] timeout 00:00:52.994 Timeout set to expire in 1 hr 0 min 00:00:52.995 [Pipeline] { 00:00:53.002 [Pipeline] stage 00:00:53.003 [Pipeline] { (Tests) 00:00:53.009 [Pipeline] sh 00:00:53.285 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:53.285 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:53.285 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:53.285 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:53.285 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.285 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:53.285 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:53.285 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:53.285 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:53.285 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:53.285 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:53.285 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:53.285 + source /etc/os-release 00:00:53.285 ++ NAME='Fedora Linux' 00:00:53.285 ++ VERSION='39 (Cloud Edition)' 00:00:53.285 ++ ID=fedora 00:00:53.285 ++ VERSION_ID=39 00:00:53.285 ++ VERSION_CODENAME= 00:00:53.285 ++ PLATFORM_ID=platform:f39 00:00:53.285 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:53.285 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:53.285 ++ LOGO=fedora-logo-icon 00:00:53.285 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:53.285 ++ HOME_URL=https://fedoraproject.org/ 00:00:53.285 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:53.285 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:53.285 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:53.285 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:53.285 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:53.285 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:53.285 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:53.285 ++ SUPPORT_END=2024-11-12 00:00:53.285 ++ VARIANT='Cloud Edition' 00:00:53.285 ++ VARIANT_ID=cloud 00:00:53.285 + uname -a 00:00:53.285 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:00:53.285 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:55.817 Hugepages 00:00:55.817 node hugesize free / total 00:00:55.817 node0 1048576kB 0 / 0 00:00:55.817 node0 2048kB 0 / 0 00:00:55.817 node1 1048576kB 0 / 0 00:00:55.817 node1 2048kB 0 / 0 00:00:55.817 00:00:55.817 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:55.817 I/OAT 
0000:00:04.0 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:55.817 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:55.817 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:55.817 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:55.817 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:55.817 + rm -f /tmp/spdk-ld-path 00:00:55.817 + source autorun-spdk.conf 00:00:55.817 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.817 ++ SPDK_TEST_NVMF=1 00:00:55.817 ++ SPDK_TEST_NVME_CLI=1 00:00:55.817 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.817 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.817 ++ SPDK_TEST_VFIOUSER=1 00:00:55.817 ++ SPDK_RUN_UBSAN=1 00:00:55.817 ++ NET_TYPE=phy 00:00:55.817 ++ RUN_NIGHTLY=0 00:00:55.817 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:55.817 + [[ -n '' ]] 00:00:55.817 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.817 + for M in /var/spdk/build-*-manifest.txt 00:00:55.817 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:55.817 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.817 + for M in /var/spdk/build-*-manifest.txt 00:00:55.817 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:55.817 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.817 + for M in /var/spdk/build-*-manifest.txt 00:00:55.817 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:55.817 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.817 ++ uname 00:00:55.817 + [[ Linux == \L\i\n\u\x ]] 00:00:55.817 + sudo dmesg -T 00:00:55.817 + sudo dmesg --clear 00:00:56.076 + dmesg_pid=3052620 00:00:56.076 + [[ Fedora Linux == FreeBSD ]] 00:00:56.076 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.076 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:56.076 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:56.076 + [[ -x /usr/src/fio-static/fio ]] 00:00:56.076 + export FIO_BIN=/usr/src/fio-static/fio 00:00:56.076 + FIO_BIN=/usr/src/fio-static/fio 00:00:56.076 + sudo dmesg -Tw 00:00:56.076 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:56.076 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:56.076 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:56.076 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.076 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:56.076 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:56.076 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.076 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:56.076 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.076 09:12:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:00:56.076 09:12:08 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:56.076 09:12:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:56.076 09:12:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:56.076 09:12:08 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.076 09:12:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:00:56.076 09:12:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:56.076 09:12:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:56.076 09:12:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:56.076 09:12:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:56.076 09:12:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:56.076 09:12:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.076 09:12:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.076 09:12:08 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.076 09:12:08 -- paths/export.sh@5 -- $ export PATH 00:00:56.076 09:12:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:56.076 09:12:08 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:56.076 09:12:08 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:56.076 09:12:08 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734077528.XXXXXX 00:00:56.076 09:12:08 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734077528.nidcQy 00:00:56.076 09:12:08 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:56.076 09:12:08 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:56.076 09:12:08 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:56.076 09:12:08 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:56.076 09:12:08 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:56.076 09:12:08 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:56.076 09:12:08 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:56.076 09:12:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.076 09:12:08 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:56.076 09:12:08 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:56.076 09:12:08 -- pm/common@17 -- $ local monitor 00:00:56.076 09:12:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.076 09:12:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.076 09:12:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.076 09:12:08 -- pm/common@21 -- $ date +%s 00:00:56.076 09:12:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:56.076 09:12:08 -- pm/common@21 -- $ date +%s 00:00:56.076 09:12:08 -- pm/common@25 -- $ sleep 1 00:00:56.076 09:12:08 -- pm/common@21 -- $ date +%s 00:00:56.076 09:12:08 -- pm/common@21 -- $ date +%s 00:00:56.076 09:12:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734077528 00:00:56.076 09:12:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734077528 00:00:56.076 09:12:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734077528 00:00:56.076 09:12:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734077528 00:00:56.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734077528_collect-cpu-load.pm.log 00:00:56.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734077528_collect-bmc-pm.bmc.pm.log 00:00:56.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734077528_collect-vmstat.pm.log 00:00:56.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734077528_collect-cpu-temp.pm.log 00:00:57.011 09:12:09 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:57.011 09:12:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:57.011 09:12:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:57.011 09:12:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.011 09:12:09 -- spdk/autobuild.sh@16 -- $ date -u 00:00:57.011 Fri Dec 13 08:12:09 AM UTC 2024 00:00:57.011 09:12:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:57.269 v25.01-pre-326-g575641720 00:00:57.269 09:12:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:57.269 09:12:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:57.269 09:12:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:57.269 09:12:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:57.269 09:12:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:57.269 09:12:09 -- common/autotest_common.sh@10 -- $ set +x 00:00:57.269 ************************************ 00:00:57.269 START TEST ubsan 00:00:57.269 ************************************ 00:00:57.269 09:12:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:57.269 using ubsan 00:00:57.269 00:00:57.269 real 0m0.000s 00:00:57.269 user 0m0.000s 00:00:57.269 sys 0m0.000s 00:00:57.269 09:12:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:57.269 09:12:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:57.269 ************************************ 00:00:57.269 END TEST ubsan 00:00:57.269 ************************************ 00:00:57.269 09:12:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:57.269 09:12:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:57.269 09:12:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:57.269 09:12:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:57.269 09:12:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:57.269 09:12:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:57.269 09:12:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:57.269 09:12:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:57.269 
09:12:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:57.269 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:57.269 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:57.835 Using 'verbs' RDMA provider 00:01:10.608 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:22.880 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:22.880 Creating mk/config.mk...done. 00:01:22.880 Creating mk/cc.flags.mk...done. 00:01:22.880 Type 'make' to build. 00:01:22.880 09:12:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:22.880 09:12:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:22.880 09:12:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:22.880 09:12:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.880 ************************************ 00:01:22.880 START TEST make 00:01:22.880 ************************************ 00:01:22.881 09:12:34 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:22.881 make[1]: Nothing to be done for 'all'. 00:01:23.501 The Meson build system 00:01:23.501 Version: 1.5.0 00:01:23.501 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:23.501 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:23.501 Build type: native build 00:01:23.501 Project name: libvfio-user 00:01:23.501 Project version: 0.0.1 00:01:23.501 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:23.501 C linker for the host machine: cc ld.bfd 2.40-14 00:01:23.501 Host machine cpu family: x86_64 00:01:23.501 Host machine cpu: x86_64 00:01:23.501 Run-time dependency threads found: YES 00:01:23.501 Library dl found: YES 00:01:23.501 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:23.501 Run-time dependency json-c found: YES 0.17 00:01:23.501 Run-time dependency cmocka found: YES 1.1.7 00:01:23.501 Program pytest-3 found: NO 00:01:23.501 Program flake8 found: NO 00:01:23.501 Program misspell-fixer found: NO 00:01:23.501 Program restructuredtext-lint found: NO 00:01:23.501 Program valgrind found: YES (/usr/bin/valgrind) 00:01:23.501 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.501 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.501 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.501 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:23.501 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:23.501 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:23.501 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:23.501 Build targets in project: 8 00:01:23.501 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:23.501 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:23.501 00:01:23.501 libvfio-user 0.0.1 00:01:23.501 00:01:23.501 User defined options 00:01:23.501 buildtype : debug 00:01:23.501 default_library: shared 00:01:23.501 libdir : /usr/local/lib 00:01:23.501 00:01:23.501 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.068 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:24.326 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:24.326 [2/37] Compiling C object samples/null.p/null.c.o 00:01:24.326 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:24.326 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:24.326 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:24.326 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:24.326 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:24.326 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:24.326 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:24.326 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:24.326 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:24.326 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:24.326 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:24.326 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:24.326 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:24.326 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:24.326 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:24.326 [18/37] Compiling C object samples/client.p/client.c.o 00:01:24.326 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:24.326 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:24.326 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:24.326 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:24.326 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:24.326 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:24.326 [25/37] Compiling C object samples/server.p/server.c.o 00:01:24.326 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:24.326 [27/37] Linking target samples/client 00:01:24.326 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:24.326 [29/37] Linking target test/unit_tests 00:01:24.326 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:24.585 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:24.585 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:24.585 [33/37] Linking target samples/server 00:01:24.585 [34/37] Linking target samples/gpio-pci-idio-16 00:01:24.585 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:24.585 [36/37] Linking target samples/null 00:01:24.585 [37/37] Linking target samples/lspci 00:01:24.585 INFO: autodetecting backend as ninja 00:01:24.585 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:24.844 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.103 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:25.103 ninja: no work to do. 00:01:30.373 The Meson build system 00:01:30.373 Version: 1.5.0 00:01:30.373 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:30.373 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:30.373 Build type: native build 00:01:30.373 Program cat found: YES (/usr/bin/cat) 00:01:30.373 Project name: DPDK 00:01:30.373 Project version: 24.03.0 00:01:30.373 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:30.373 C linker for the host machine: cc ld.bfd 2.40-14 00:01:30.373 Host machine cpu family: x86_64 00:01:30.373 Host machine cpu: x86_64 00:01:30.373 Message: ## Building in Developer Mode ## 00:01:30.373 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.373 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:30.373 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:30.373 Program python3 found: YES (/usr/bin/python3) 00:01:30.373 Program cat found: YES (/usr/bin/cat) 00:01:30.373 Compiler for C supports arguments -march=native: YES 00:01:30.373 Checking for size of "void *" : 8 00:01:30.373 Checking for size of "void *" : 8 (cached) 00:01:30.373 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:30.373 Library m found: YES 00:01:30.373 Library numa found: YES 00:01:30.373 Has header "numaif.h" : YES 00:01:30.373 Library fdt found: NO 00:01:30.373 Library execinfo found: NO 00:01:30.373 Has header "execinfo.h" : YES 00:01:30.373 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:30.373 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.373 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.373 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.373 Run-time dependency openssl found: YES 3.1.1 00:01:30.373 Run-time dependency libpcap found: YES 1.10.4 00:01:30.373 Has header "pcap.h" with dependency libpcap: YES 00:01:30.373 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.373 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.373 Compiler for C supports arguments -Wformat: YES 00:01:30.373 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.373 Compiler for C supports arguments -Wformat-security: NO 00:01:30.373 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.373 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.373 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.373 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.373 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.373 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.373 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.373 Compiler for C supports arguments -Wundef: YES 00:01:30.373 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.373 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:30.373 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:30.373 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.373 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.373 Program objdump found: YES (/usr/bin/objdump) 00:01:30.373 Compiler for C supports arguments -mavx512f: YES 00:01:30.373 Checking if "AVX512 checking" compiles: YES 00:01:30.373 Fetching value of define "__SSE4_2__" : 1 00:01:30.373 Fetching value of define "__AES__" : 1 00:01:30.373 Fetching value of define "__AVX__" : 1 00:01:30.373 Fetching value of define "__AVX2__" : 1 00:01:30.373 Fetching value of define "__AVX512BW__" : 1 00:01:30.373 Fetching value of define "__AVX512CD__" : 1 00:01:30.373 Fetching value of define "__AVX512DQ__" : 1 00:01:30.373 Fetching value of define "__AVX512F__" : 1 00:01:30.373 Fetching value of define "__AVX512VL__" : 1 00:01:30.373 Fetching value of define "__PCLMUL__" : 1 00:01:30.373 Fetching value of define "__RDRND__" : 1 00:01:30.373 Fetching value of define "__RDSEED__" : 1 00:01:30.373 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:30.373 Fetching value of define "__znver1__" : (undefined) 00:01:30.373 Fetching value of define "__znver2__" : (undefined) 00:01:30.373 Fetching value of define "__znver3__" : (undefined) 00:01:30.373 Fetching value of define "__znver4__" : (undefined) 00:01:30.373 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.373 Message: lib/log: Defining dependency "log" 00:01:30.373 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.373 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.373 Checking for function "getentropy" : NO 00:01:30.373 Message: lib/eal: Defining dependency "eal" 00:01:30.373 Message: lib/ring: Defining dependency "ring" 00:01:30.373 Message: lib/rcu: Defining dependency "rcu" 00:01:30.373 Message: lib/mempool: Defining dependency "mempool" 00:01:30.373 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.373 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.373 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:30.373 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:30.374 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:30.374 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:30.374 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:30.374 Compiler for C supports arguments -mpclmul: YES 00:01:30.374 Compiler for C supports arguments -maes: YES 00:01:30.374 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.374 Compiler for C supports arguments -mavx512bw: YES 00:01:30.374 Compiler for C supports arguments -mavx512dq: YES 00:01:30.374 Compiler for C supports arguments -mavx512vl: YES 00:01:30.374 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.374 Compiler for C supports arguments -mavx2: YES 00:01:30.374 Compiler for C supports arguments -mavx: YES 00:01:30.374 Message: lib/net: Defining dependency "net" 00:01:30.374 Message: lib/meter: Defining dependency "meter" 00:01:30.374 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.374 Message: lib/pci: Defining dependency "pci" 00:01:30.374 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.374 Message: lib/hash: Defining dependency "hash" 00:01:30.374 Message: lib/timer: Defining dependency "timer" 00:01:30.374 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.374 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.374 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:30.374 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.374 Message: lib/power: Defining dependency "power" 00:01:30.374 Message: lib/reorder: Defining dependency "reorder" 00:01:30.374 Message: lib/security: Defining dependency "security" 00:01:30.374 Has header "linux/userfaultfd.h" : YES 00:01:30.374 Has header "linux/vduse.h" : YES 00:01:30.374 Message: lib/vhost: Defining dependency "vhost" 00:01:30.374 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.374 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.374 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.374 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.374 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:30.374 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:30.374 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:30.374 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:30.374 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:30.374 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:30.374 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:30.374 Configuring doxy-api-html.conf using configuration 00:01:30.374 Configuring doxy-api-man.conf using configuration 00:01:30.374 Program mandb found: YES (/usr/bin/mandb) 00:01:30.374 Program sphinx-build found: NO 00:01:30.374 Configuring rte_build_config.h using configuration 00:01:30.374 Message: 00:01:30.374 ================= 00:01:30.374 Applications Enabled 00:01:30.374 ================= 00:01:30.374 00:01:30.374 apps: 00:01:30.374 00:01:30.374 00:01:30.374 Message: 00:01:30.374 ================= 00:01:30.374 Libraries Enabled 00:01:30.374 ================= 00:01:30.374 00:01:30.374 libs: 00:01:30.374 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:30.374 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:30.374 cryptodev, dmadev, power, reorder, security, vhost, 00:01:30.374 00:01:30.374 Message: 00:01:30.374 =============== 00:01:30.374 Drivers Enabled 00:01:30.374 =============== 00:01:30.374 00:01:30.374 common: 00:01:30.374 00:01:30.374 bus: 00:01:30.374 pci, vdev, 00:01:30.374 mempool: 00:01:30.374 ring, 00:01:30.374 dma: 00:01:30.374 00:01:30.374 net: 00:01:30.374 00:01:30.374 crypto: 00:01:30.374 00:01:30.374 compress: 00:01:30.374 00:01:30.374 vdpa: 00:01:30.374 00:01:30.374 00:01:30.374 Message: 00:01:30.374 ================= 00:01:30.374 Content Skipped 00:01:30.374 ================= 00:01:30.374 00:01:30.374 apps: 00:01:30.374 dumpcap: explicitly disabled via build config 00:01:30.374 graph: explicitly disabled via build config 00:01:30.374 pdump: explicitly disabled via build config 00:01:30.374 proc-info: explicitly disabled via build config 00:01:30.374 test-acl: explicitly disabled via build config 00:01:30.374 test-bbdev: explicitly disabled via build config 00:01:30.374 test-cmdline: explicitly disabled via build config 00:01:30.374 test-compress-perf: explicitly disabled via build config 00:01:30.374 test-crypto-perf: explicitly disabled via build config 00:01:30.374 test-dma-perf: explicitly disabled via build config 00:01:30.374 test-eventdev: explicitly disabled via build config 00:01:30.374 test-fib: explicitly disabled via build config 00:01:30.374 test-flow-perf: explicitly disabled via build config 00:01:30.374 test-gpudev: explicitly 
disabled via build config 00:01:30.374 test-mldev: explicitly disabled via build config 00:01:30.374 test-pipeline: explicitly disabled via build config 00:01:30.374 test-pmd: explicitly disabled via build config 00:01:30.374 test-regex: explicitly disabled via build config 00:01:30.374 test-sad: explicitly disabled via build config 00:01:30.374 test-security-perf: explicitly disabled via build config 00:01:30.374 00:01:30.374 libs: 00:01:30.374 argparse: explicitly disabled via build config 00:01:30.374 metrics: explicitly disabled via build config 00:01:30.374 acl: explicitly disabled via build config 00:01:30.374 bbdev: explicitly disabled via build config 00:01:30.374 bitratestats: explicitly disabled via build config 00:01:30.374 bpf: explicitly disabled via build config 00:01:30.374 cfgfile: explicitly disabled via build config 00:01:30.374 distributor: explicitly disabled via build config 00:01:30.374 efd: explicitly disabled via build config 00:01:30.374 eventdev: explicitly disabled via build config 00:01:30.374 dispatcher: explicitly disabled via build config 00:01:30.374 gpudev: explicitly disabled via build config 00:01:30.374 gro: explicitly disabled via build config 00:01:30.374 gso: explicitly disabled via build config 00:01:30.374 ip_frag: explicitly disabled via build config 00:01:30.374 jobstats: explicitly disabled via build config 00:01:30.374 latencystats: explicitly disabled via build config 00:01:30.374 lpm: explicitly disabled via build config 00:01:30.374 member: explicitly disabled via build config 00:01:30.374 pcapng: explicitly disabled via build config 00:01:30.374 rawdev: explicitly disabled via build config 00:01:30.374 regexdev: explicitly disabled via build config 00:01:30.374 mldev: explicitly disabled via build config 00:01:30.374 rib: explicitly disabled via build config 00:01:30.374 sched: explicitly disabled via build config 00:01:30.374 stack: explicitly disabled via build config 00:01:30.374 ipsec: explicitly disabled via build config 00:01:30.374 pdcp: explicitly disabled via build config 00:01:30.374 fib: explicitly disabled via build config 00:01:30.374 port: explicitly disabled via build config 00:01:30.374 pdump: explicitly disabled via build config 00:01:30.374 table: explicitly disabled via build config 00:01:30.374 pipeline: explicitly disabled via build config 00:01:30.374 graph: explicitly disabled via build config 00:01:30.374 node: explicitly disabled via build config 00:01:30.374 00:01:30.374 drivers: 00:01:30.374 common/cpt: not in enabled drivers build config 00:01:30.374 common/dpaax: not in enabled drivers build config 00:01:30.374 common/iavf: not in enabled drivers build config 00:01:30.374 common/idpf: not in enabled drivers build config 00:01:30.374 common/ionic: not in enabled drivers build config 00:01:30.374 common/mvep: not in enabled drivers build config 00:01:30.374 common/octeontx: not in enabled drivers build config 00:01:30.374 bus/auxiliary: not in enabled drivers build config 00:01:30.374 bus/cdx: not in enabled drivers build config 00:01:30.374 bus/dpaa: not in enabled drivers build config 00:01:30.374 bus/fslmc: not in enabled drivers build config 00:01:30.374 bus/ifpga: not in enabled drivers build config 00:01:30.374 bus/platform: not in enabled drivers build config 00:01:30.374 bus/uacce: not in enabled drivers build config 00:01:30.374 bus/vmbus: not in enabled drivers build config 00:01:30.374 common/cnxk: not in enabled drivers build config 00:01:30.374 common/mlx5: not in enabled drivers build config 
00:01:30.374 common/nfp: not in enabled drivers build config 00:01:30.374 common/nitrox: not in enabled drivers build config 00:01:30.374 common/qat: not in enabled drivers build config 00:01:30.374 common/sfc_efx: not in enabled drivers build config 00:01:30.374 mempool/bucket: not in enabled drivers build config 00:01:30.374 mempool/cnxk: not in enabled drivers build config 00:01:30.374 mempool/dpaa: not in enabled drivers build config 00:01:30.374 mempool/dpaa2: not in enabled drivers build config 00:01:30.374 mempool/octeontx: not in enabled drivers build config 00:01:30.374 mempool/stack: not in enabled drivers build config 00:01:30.374 dma/cnxk: not in enabled drivers build config 00:01:30.374 dma/dpaa: not in enabled drivers build config 00:01:30.374 dma/dpaa2: not in enabled drivers build config 00:01:30.374 dma/hisilicon: not in enabled drivers build config 00:01:30.375 dma/idxd: not in enabled drivers build config 00:01:30.375 dma/ioat: not in enabled drivers build config 00:01:30.375 dma/skeleton: not in enabled drivers build config 00:01:30.375 net/af_packet: not in enabled drivers build config 00:01:30.375 net/af_xdp: not in enabled drivers build config 00:01:30.375 net/ark: not in enabled drivers build config 00:01:30.375 net/atlantic: not in enabled drivers build config 00:01:30.375 net/avp: not in enabled drivers build config 00:01:30.375 net/axgbe: not in enabled drivers build config 00:01:30.375 net/bnx2x: not in enabled drivers build config 00:01:30.375 net/bnxt: not in enabled drivers build config 00:01:30.375 net/bonding: not in enabled drivers build config 00:01:30.375 net/cnxk: not in enabled drivers build config 00:01:30.375 net/cpfl: not in enabled drivers build config 00:01:30.375 net/cxgbe: not in enabled drivers build config 00:01:30.375 net/dpaa: not in enabled drivers build config 00:01:30.375 net/dpaa2: not in enabled drivers build config 00:01:30.375 net/e1000: not in enabled drivers build config 00:01:30.375 net/ena: not in enabled drivers build config 00:01:30.375 net/enetc: not in enabled drivers build config 00:01:30.375 net/enetfec: not in enabled drivers build config 00:01:30.375 net/enic: not in enabled drivers build config 00:01:30.375 net/failsafe: not in enabled drivers build config 00:01:30.375 net/fm10k: not in enabled drivers build config 00:01:30.375 net/gve: not in enabled drivers build config 00:01:30.375 net/hinic: not in enabled drivers build config 00:01:30.375 net/hns3: not in enabled drivers build config 00:01:30.375 net/i40e: not in enabled drivers build config 00:01:30.375 net/iavf: not in enabled drivers build config 00:01:30.375 net/ice: not in enabled drivers build config 00:01:30.375 net/idpf: not in enabled drivers build config 00:01:30.375 net/igc: not in enabled drivers build config 00:01:30.375 net/ionic: not in enabled drivers build config 00:01:30.375 net/ipn3ke: not in enabled drivers build config 00:01:30.375 net/ixgbe: not in enabled drivers build config 00:01:30.375 net/mana: not in enabled drivers build config 00:01:30.375 net/memif: not in enabled drivers build config 00:01:30.375 net/mlx4: not in enabled drivers build config 00:01:30.375 net/mlx5: not in enabled drivers build config 00:01:30.375 net/mvneta: not in enabled drivers build config 00:01:30.375 net/mvpp2: not in enabled drivers build config 00:01:30.375 net/netvsc: not in enabled drivers build config 00:01:30.375 net/nfb: not in enabled drivers build config 00:01:30.375 net/nfp: not in enabled drivers build config 00:01:30.375 net/ngbe: not in enabled 
drivers build config 00:01:30.375 net/null: not in enabled drivers build config 00:01:30.375 net/octeontx: not in enabled drivers build config 00:01:30.375 net/octeon_ep: not in enabled drivers build config 00:01:30.375 net/pcap: not in enabled drivers build config 00:01:30.375 net/pfe: not in enabled drivers build config 00:01:30.375 net/qede: not in enabled drivers build config 00:01:30.375 net/ring: not in enabled drivers build config 00:01:30.375 net/sfc: not in enabled drivers build config 00:01:30.375 net/softnic: not in enabled drivers build config 00:01:30.375 net/tap: not in enabled drivers build config 00:01:30.375 net/thunderx: not in enabled drivers build config 00:01:30.375 net/txgbe: not in enabled drivers build config 00:01:30.375 net/vdev_netvsc: not in enabled drivers build config 00:01:30.375 net/vhost: not in enabled drivers build config 00:01:30.375 net/virtio: not in enabled drivers build config 00:01:30.375 net/vmxnet3: not in enabled drivers build config 00:01:30.375 raw/*: missing internal dependency, "rawdev" 00:01:30.375 crypto/armv8: not in enabled drivers build config 00:01:30.375 crypto/bcmfs: not in enabled drivers build config 00:01:30.375 crypto/caam_jr: not in enabled drivers build config 00:01:30.375 crypto/ccp: not in enabled drivers build config 00:01:30.375 crypto/cnxk: not in enabled drivers build config 00:01:30.375 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.375 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.375 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.375 crypto/mlx5: not in enabled drivers build config 00:01:30.375 crypto/mvsam: not in enabled drivers build config 00:01:30.375 crypto/nitrox: not in enabled drivers build config 00:01:30.375 crypto/null: not in enabled drivers build config 00:01:30.375 crypto/octeontx: not in enabled drivers build config 00:01:30.375 crypto/openssl: not in enabled drivers build config 00:01:30.375 crypto/scheduler: not in enabled drivers build config 00:01:30.375 crypto/uadk: not in enabled drivers build config 00:01:30.375 crypto/virtio: not in enabled drivers build config 00:01:30.375 compress/isal: not in enabled drivers build config 00:01:30.375 compress/mlx5: not in enabled drivers build config 00:01:30.375 compress/nitrox: not in enabled drivers build config 00:01:30.375 compress/octeontx: not in enabled drivers build config 00:01:30.375 compress/zlib: not in enabled drivers build config 00:01:30.375 regex/*: missing internal dependency, "regexdev" 00:01:30.375 ml/*: missing internal dependency, "mldev" 00:01:30.375 vdpa/ifc: not in enabled drivers build config 00:01:30.375 vdpa/mlx5: not in enabled drivers build config 00:01:30.375 vdpa/nfp: not in enabled drivers build config 00:01:30.375 vdpa/sfc: not in enabled drivers build config 00:01:30.375 event/*: missing internal dependency, "eventdev" 00:01:30.375 baseband/*: missing internal dependency, "bbdev" 00:01:30.375 gpu/*: missing internal dependency, "gpudev" 00:01:30.375 00:01:30.375 00:01:30.633 Build targets in project: 85 00:01:30.633 00:01:30.633 DPDK 24.03.0 00:01:30.633 00:01:30.633 User defined options 00:01:30.633 buildtype : debug 00:01:30.633 default_library : shared 00:01:30.633 libdir : lib 00:01:30.633 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.633 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:30.633 c_link_args : 00:01:30.633 cpu_instruction_set: native 00:01:30.633 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:30.633 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:30.633 enable_docs : false 00:01:30.633 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:30.633 enable_kmods : false 00:01:30.633 max_lcores : 128 00:01:30.633 tests : false 00:01:30.633 00:01:30.633 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.899 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:31.166 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.166 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.166 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.166 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.166 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.166 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.166 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.166 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.166 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:31.166 [10/268] Linking static target lib/librte_kvargs.a 00:01:31.166 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.166 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.166 [13/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.166 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.166 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.166 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.166 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.166 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.166 [19/268] Linking static target lib/librte_log.a 00:01:31.425 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.425 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.425 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.425 [23/268] Linking static target lib/librte_pci.a 00:01:31.425 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.425 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.425 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.692 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:31.692 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.692 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:31.692 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
00:01:31.692 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:31.692 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:31.692 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.692 [34/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.692 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.692 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:31.692 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:31.692 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.692 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.692 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:31.692 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:31.692 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.692 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:31.692 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:31.692 [45/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.692 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:31.692 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:31.692 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:31.692 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.692 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:31.692 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:31.692 [52/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.692 [53/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:31.692 [54/268] Linking static target lib/librte_meter.a 00:01:31.692 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:31.692 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:31.692 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.692 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.692 [59/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:31.692 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.692 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.692 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.692 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.692 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.692 [65/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.692 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:31.692 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:31.692 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.692 [69/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.692 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.692 [71/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.692 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:31.692 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:31.692 [74/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:31.692 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:31.692 [76/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:31.692 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:31.692 [78/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:31.692 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:31.692 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:31.692 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.692 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:31.692 [83/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:31.692 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.692 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:31.951 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:31.951 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:31.951 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:31.951 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:31.951 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:31.951 [91/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.951 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:31.951 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.951 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:31.951 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:31.951 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.951 [97/268] Linking static target lib/librte_ring.a 00:01:31.951 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:31.951 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.951 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.951 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:31.951 [102/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.951 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:31.951 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.951 [105/268] Linking static target lib/librte_mempool.a 00:01:31.951 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.951 [107/268] Linking static target lib/librte_telemetry.a 00:01:31.952 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.952 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:31.952 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:31.952 [111/268] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.952 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:31.952 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.952 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:31.952 [115/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.952 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.952 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:31.952 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:31.952 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.952 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:31.952 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:31.952 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:31.952 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.952 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.952 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:31.952 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:31.952 [127/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:31.952 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:31.952 [129/268] Linking static target lib/librte_net.a 00:01:31.952 [130/268] Linking static target lib/librte_cmdline.a 00:01:31.952 [131/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.952 [132/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.952 [133/268] Linking static target lib/librte_rcu.a 00:01:31.952 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:31.952 [135/268] Linking static target lib/librte_eal.a 00:01:31.952 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.952 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.952 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.210 [139/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.210 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.210 [141/268] Linking static target lib/librte_mbuf.a 00:01:32.210 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.210 [143/268] Linking static target lib/librte_timer.a 00:01:32.210 [144/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.210 [145/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.210 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.210 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.210 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.210 [149/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.210 [150/268] Linking target lib/librte_log.so.24.1 00:01:32.210 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.210 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.210 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.210 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.210 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.210 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.210 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.210 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.210 [159/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.210 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.210 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.211 [162/268] Linking static target lib/librte_compressdev.a 00:01:32.211 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.211 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:32.211 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.211 [166/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.211 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.211 [168/268] Linking static target lib/librte_dmadev.a 00:01:32.211 [169/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:32.211 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.211 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.211 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.211 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.211 [174/268] Linking static target lib/librte_reorder.a 00:01:32.211 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.211 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.211 [177/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.211 [178/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.211 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.211 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.211 [181/268] Linking target lib/librte_kvargs.so.24.1 00:01:32.211 [182/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.468 [183/268] Linking static target lib/librte_security.a 00:01:32.468 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.468 [185/268] Linking target lib/librte_telemetry.so.24.1 00:01:32.468 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.468 [187/268] Linking static target lib/librte_power.a 00:01:32.468 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.468 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.468 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.468 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.468 [192/268] Linking static target 
drivers/librte_bus_vdev.a 00:01:32.468 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.468 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.468 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:32.468 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.468 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.468 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:32.468 [199/268] Linking static target lib/librte_hash.a 00:01:32.468 [200/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.468 [201/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.468 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.468 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.468 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.468 [205/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.469 [206/268] Linking static target lib/librte_cryptodev.a 00:01:32.469 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.469 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.469 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:32.727 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.727 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.728 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.728 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.728 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.728 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:32.728 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.728 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.986 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.986 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.986 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.986 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.986 [222/268] Linking static target lib/librte_ethdev.a 00:01:32.986 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.245 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:33.245 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.245 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.245 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.618 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:34.618 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.618 [230/268] Linking static target lib/librte_vhost.a 00:01:35.993 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.251 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.508 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.508 [234/268] Linking target lib/librte_eal.so.24.1 00:01:41.766 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:41.766 [236/268] Linking target lib/librte_ring.so.24.1 00:01:41.766 [237/268] Linking target lib/librte_meter.so.24.1 00:01:41.766 [238/268] Linking target lib/librte_timer.so.24.1 00:01:41.766 [239/268] Linking target lib/librte_pci.so.24.1 00:01:41.766 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:41.766 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:41.766 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:41.766 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:41.766 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:41.766 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:41.766 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:41.766 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:42.022 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:42.022 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:42.022 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:42.022 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:42.022 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:42.022 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:42.278 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:42.278 [255/268] Linking target lib/librte_net.so.24.1 00:01:42.278 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:01:42.278 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:42.278 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:42.278 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:42.278 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:42.536 [261/268] Linking target lib/librte_hash.so.24.1 00:01:42.536 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:42.536 [263/268] Linking target lib/librte_security.so.24.1 00:01:42.536 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:42.536 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:42.536 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:42.536 [267/268] Linking target lib/librte_power.so.24.1 00:01:42.536 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:42.536 INFO: autodetecting backend as ninja 00:01:42.536 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:54.736 CC lib/ut/ut.o 00:01:54.736 CC lib/ut_mock/mock.o 00:01:54.736 CC lib/log/log.o 00:01:54.736 CC 
lib/log/log_flags.o 00:01:54.736 CC lib/log/log_deprecated.o 00:01:54.736 LIB libspdk_ut.a 00:01:54.736 LIB libspdk_ut_mock.a 00:01:54.736 LIB libspdk_log.a 00:01:54.736 SO libspdk_ut.so.2.0 00:01:54.736 SO libspdk_ut_mock.so.6.0 00:01:54.736 SO libspdk_log.so.7.1 00:01:54.736 SYMLINK libspdk_ut.so 00:01:54.736 SYMLINK libspdk_ut_mock.so 00:01:54.736 SYMLINK libspdk_log.so 00:01:54.736 CC lib/util/base64.o 00:01:54.736 CC lib/util/bit_array.o 00:01:54.736 CC lib/util/crc32.o 00:01:54.736 CC lib/util/cpuset.o 00:01:54.736 CC lib/util/crc16.o 00:01:54.736 CC lib/util/crc32c.o 00:01:54.736 CC lib/util/crc32_ieee.o 00:01:54.736 CC lib/util/fd.o 00:01:54.736 CC lib/util/crc64.o 00:01:54.736 CC lib/util/dif.o 00:01:54.736 CC lib/util/fd_group.o 00:01:54.736 CC lib/util/file.o 00:01:54.736 CC lib/util/hexlify.o 00:01:54.736 CC lib/util/iov.o 00:01:54.736 CC lib/util/math.o 00:01:54.736 CC lib/util/net.o 00:01:54.736 CC lib/util/pipe.o 00:01:54.736 CC lib/util/xor.o 00:01:54.736 CC lib/util/strerror_tls.o 00:01:54.736 CC lib/ioat/ioat.o 00:01:54.736 CC lib/util/string.o 00:01:54.736 CC lib/util/zipf.o 00:01:54.736 CC lib/util/uuid.o 00:01:54.736 CC lib/util/md5.o 00:01:54.736 CXX lib/trace_parser/trace.o 00:01:54.736 CC lib/dma/dma.o 00:01:54.736 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.736 CC lib/vfio_user/host/vfio_user.o 00:01:54.736 LIB libspdk_dma.a 00:01:54.736 SO libspdk_dma.so.5.0 00:01:54.736 LIB libspdk_ioat.a 00:01:54.736 SYMLINK libspdk_dma.so 00:01:54.736 SO libspdk_ioat.so.7.0 00:01:54.736 LIB libspdk_vfio_user.a 00:01:54.736 SYMLINK libspdk_ioat.so 00:01:54.736 SO libspdk_vfio_user.so.5.0 00:01:54.736 SYMLINK libspdk_vfio_user.so 00:01:54.736 LIB libspdk_util.a 00:01:54.736 SO libspdk_util.so.10.1 00:01:54.736 SYMLINK libspdk_util.so 00:01:54.736 LIB libspdk_trace_parser.a 00:01:54.736 SO libspdk_trace_parser.so.6.0 00:01:54.736 SYMLINK libspdk_trace_parser.so 00:01:54.736 CC lib/vmd/vmd.o 00:01:54.736 CC lib/vmd/led.o 00:01:54.736 CC lib/json/json_parse.o 00:01:54.736 CC lib/json/json_util.o 00:01:54.736 CC lib/json/json_write.o 00:01:54.736 CC lib/idxd/idxd.o 00:01:54.736 CC lib/idxd/idxd_user.o 00:01:54.736 CC lib/idxd/idxd_kernel.o 00:01:54.736 CC lib/conf/conf.o 00:01:54.736 CC lib/env_dpdk/env.o 00:01:54.736 CC lib/rdma_utils/rdma_utils.o 00:01:54.736 CC lib/env_dpdk/memory.o 00:01:54.736 CC lib/env_dpdk/pci.o 00:01:54.736 CC lib/env_dpdk/init.o 00:01:54.736 CC lib/env_dpdk/threads.o 00:01:54.736 CC lib/env_dpdk/pci_ioat.o 00:01:54.736 CC lib/env_dpdk/pci_virtio.o 00:01:54.736 CC lib/env_dpdk/pci_vmd.o 00:01:54.736 CC lib/env_dpdk/pci_idxd.o 00:01:54.736 CC lib/env_dpdk/pci_event.o 00:01:54.736 CC lib/env_dpdk/sigbus_handler.o 00:01:54.736 CC lib/env_dpdk/pci_dpdk.o 00:01:54.736 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.736 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.736 LIB libspdk_conf.a 00:01:54.736 SO libspdk_conf.so.6.0 00:01:54.736 LIB libspdk_rdma_utils.a 00:01:54.736 LIB libspdk_json.a 00:01:54.736 SO libspdk_rdma_utils.so.1.0 00:01:54.736 SO libspdk_json.so.6.0 00:01:54.736 SYMLINK libspdk_conf.so 00:01:54.736 SYMLINK libspdk_rdma_utils.so 00:01:54.736 SYMLINK libspdk_json.so 00:01:54.994 LIB libspdk_idxd.a 00:01:54.994 LIB libspdk_vmd.a 00:01:54.994 SO libspdk_vmd.so.6.0 00:01:54.994 SO libspdk_idxd.so.12.1 00:01:54.994 SYMLINK libspdk_idxd.so 00:01:54.994 SYMLINK libspdk_vmd.so 00:01:54.994 CC lib/rdma_provider/common.o 00:01:54.994 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:54.994 CC lib/jsonrpc/jsonrpc_server.o 00:01:54.994 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:54.994 CC lib/jsonrpc/jsonrpc_client.o 00:01:54.994 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.252 LIB libspdk_rdma_provider.a 00:01:55.252 SO libspdk_rdma_provider.so.7.0 00:01:55.252 LIB libspdk_jsonrpc.a 00:01:55.252 SO libspdk_jsonrpc.so.6.0 00:01:55.252 SYMLINK libspdk_rdma_provider.so 00:01:55.512 SYMLINK libspdk_jsonrpc.so 00:01:55.512 LIB libspdk_env_dpdk.a 00:01:55.512 SO libspdk_env_dpdk.so.15.1 00:01:55.771 SYMLINK libspdk_env_dpdk.so 00:01:55.771 CC lib/rpc/rpc.o 00:01:55.771 LIB libspdk_rpc.a 00:01:56.030 SO libspdk_rpc.so.6.0 00:01:56.030 SYMLINK libspdk_rpc.so 00:01:56.288 CC lib/keyring/keyring.o 00:01:56.288 CC lib/notify/notify.o 00:01:56.288 CC lib/keyring/keyring_rpc.o 00:01:56.288 CC lib/notify/notify_rpc.o 00:01:56.288 CC lib/trace/trace.o 00:01:56.288 CC lib/trace/trace_rpc.o 00:01:56.288 CC lib/trace/trace_flags.o 00:01:56.289 LIB libspdk_notify.a 00:01:56.548 SO libspdk_notify.so.6.0 00:01:56.548 LIB libspdk_keyring.a 00:01:56.548 SO libspdk_keyring.so.2.0 00:01:56.548 LIB libspdk_trace.a 00:01:56.548 SYMLINK libspdk_notify.so 00:01:56.548 SO libspdk_trace.so.11.0 00:01:56.548 SYMLINK libspdk_keyring.so 00:01:56.548 SYMLINK libspdk_trace.so 00:01:56.807 CC lib/sock/sock.o 00:01:56.807 CC lib/sock/sock_rpc.o 00:01:56.807 CC lib/thread/thread.o 00:01:56.807 CC lib/thread/iobuf.o 00:01:57.066 LIB libspdk_sock.a 00:01:57.324 SO libspdk_sock.so.10.0 00:01:57.324 SYMLINK libspdk_sock.so 00:01:57.582 CC lib/nvme/nvme_ctrlr.o 00:01:57.582 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.582 CC lib/nvme/nvme_fabric.o 00:01:57.582 CC lib/nvme/nvme_ns_cmd.o 00:01:57.582 CC lib/nvme/nvme_ns.o 00:01:57.582 CC lib/nvme/nvme_pcie.o 00:01:57.582 CC lib/nvme/nvme_qpair.o 00:01:57.582 CC lib/nvme/nvme_pcie_common.o 00:01:57.582 CC lib/nvme/nvme.o 00:01:57.582 CC lib/nvme/nvme_quirks.o 00:01:57.582 CC lib/nvme/nvme_transport.o 00:01:57.582 CC lib/nvme/nvme_discovery.o 00:01:57.582 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.582 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.582 CC lib/nvme/nvme_tcp.o 00:01:57.582 CC lib/nvme/nvme_opal.o 00:01:57.582 CC lib/nvme/nvme_io_msg.o 00:01:57.582 CC lib/nvme/nvme_poll_group.o 00:01:57.582 CC lib/nvme/nvme_zns.o 00:01:57.582 CC lib/nvme/nvme_stubs.o 00:01:57.582 CC lib/nvme/nvme_auth.o 00:01:57.582 CC lib/nvme/nvme_cuse.o 00:01:57.582 CC lib/nvme/nvme_vfio_user.o 00:01:57.582 CC lib/nvme/nvme_rdma.o 00:01:58.148 LIB libspdk_thread.a 00:01:58.148 SO libspdk_thread.so.11.0 00:01:58.148 SYMLINK libspdk_thread.so 00:01:58.406 CC lib/fsdev/fsdev_io.o 00:01:58.406 CC lib/fsdev/fsdev.o 00:01:58.406 CC lib/fsdev/fsdev_rpc.o 00:01:58.406 CC lib/virtio/virtio.o 00:01:58.406 CC lib/blob/blobstore.o 00:01:58.406 CC lib/virtio/virtio_vhost_user.o 00:01:58.406 CC lib/blob/request.o 00:01:58.406 CC lib/virtio/virtio_vfio_user.o 00:01:58.406 CC lib/blob/blob_bs_dev.o 00:01:58.406 CC lib/blob/zeroes.o 00:01:58.406 CC lib/virtio/virtio_pci.o 00:01:58.406 CC lib/vfu_tgt/tgt_endpoint.o 00:01:58.406 CC lib/vfu_tgt/tgt_rpc.o 00:01:58.406 CC lib/accel/accel_rpc.o 00:01:58.406 CC lib/accel/accel_sw.o 00:01:58.406 CC lib/accel/accel.o 00:01:58.406 CC lib/init/json_config.o 00:01:58.406 CC lib/init/subsystem.o 00:01:58.406 CC lib/init/subsystem_rpc.o 00:01:58.406 CC lib/init/rpc.o 00:01:58.664 LIB libspdk_init.a 00:01:58.664 LIB libspdk_virtio.a 00:01:58.664 SO libspdk_init.so.6.0 00:01:58.664 LIB libspdk_vfu_tgt.a 00:01:58.664 SO libspdk_virtio.so.7.0 00:01:58.664 SO libspdk_vfu_tgt.so.3.0 00:01:58.664 SYMLINK libspdk_init.so 00:01:58.664 
SYMLINK libspdk_virtio.so 00:01:58.664 SYMLINK libspdk_vfu_tgt.so 00:01:58.922 LIB libspdk_fsdev.a 00:01:58.922 SO libspdk_fsdev.so.2.0 00:01:58.922 SYMLINK libspdk_fsdev.so 00:01:58.922 CC lib/event/reactor.o 00:01:58.922 CC lib/event/app.o 00:01:58.922 CC lib/event/app_rpc.o 00:01:58.922 CC lib/event/log_rpc.o 00:01:58.922 CC lib/event/scheduler_static.o 00:01:59.180 LIB libspdk_accel.a 00:01:59.181 SO libspdk_accel.so.16.0 00:01:59.181 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:59.181 LIB libspdk_nvme.a 00:01:59.439 SYMLINK libspdk_accel.so 00:01:59.439 LIB libspdk_event.a 00:01:59.439 SO libspdk_nvme.so.15.0 00:01:59.439 SO libspdk_event.so.14.0 00:01:59.439 SYMLINK libspdk_event.so 00:01:59.698 SYMLINK libspdk_nvme.so 00:01:59.698 CC lib/bdev/bdev_rpc.o 00:01:59.698 CC lib/bdev/bdev.o 00:01:59.698 CC lib/bdev/part.o 00:01:59.698 CC lib/bdev/bdev_zone.o 00:01:59.698 CC lib/bdev/scsi_nvme.o 00:01:59.698 LIB libspdk_fuse_dispatcher.a 00:01:59.698 SO libspdk_fuse_dispatcher.so.1.0 00:01:59.956 SYMLINK libspdk_fuse_dispatcher.so 00:02:00.524 LIB libspdk_blob.a 00:02:00.524 SO libspdk_blob.so.12.0 00:02:00.783 SYMLINK libspdk_blob.so 00:02:01.042 CC lib/lvol/lvol.o 00:02:01.042 CC lib/blobfs/blobfs.o 00:02:01.042 CC lib/blobfs/tree.o 00:02:01.610 LIB libspdk_bdev.a 00:02:01.610 SO libspdk_bdev.so.17.0 00:02:01.610 LIB libspdk_blobfs.a 00:02:01.610 LIB libspdk_lvol.a 00:02:01.610 SO libspdk_blobfs.so.11.0 00:02:01.610 SYMLINK libspdk_bdev.so 00:02:01.610 SO libspdk_lvol.so.11.0 00:02:01.610 SYMLINK libspdk_blobfs.so 00:02:01.610 SYMLINK libspdk_lvol.so 00:02:01.869 CC lib/scsi/dev.o 00:02:01.869 CC lib/scsi/port.o 00:02:01.869 CC lib/scsi/lun.o 00:02:01.869 CC lib/scsi/scsi.o 00:02:01.869 CC lib/scsi/scsi_bdev.o 00:02:01.869 CC lib/scsi/scsi_pr.o 00:02:01.869 CC lib/scsi/scsi_rpc.o 00:02:01.869 CC lib/scsi/task.o 00:02:01.869 CC lib/ublk/ublk.o 00:02:01.869 CC lib/ublk/ublk_rpc.o 00:02:01.869 CC lib/nvmf/ctrlr.o 00:02:01.869 CC lib/nvmf/ctrlr_bdev.o 00:02:01.869 CC lib/nvmf/ctrlr_discovery.o 00:02:01.869 CC lib/nvmf/subsystem.o 00:02:01.869 CC lib/nvmf/nvmf.o 00:02:01.869 CC lib/nbd/nbd.o 00:02:01.869 CC lib/nvmf/nvmf_rpc.o 00:02:01.869 CC lib/nvmf/transport.o 00:02:01.869 CC lib/nvmf/tcp.o 00:02:01.869 CC lib/nbd/nbd_rpc.o 00:02:01.869 CC lib/ftl/ftl_core.o 00:02:01.869 CC lib/nvmf/mdns_server.o 00:02:01.869 CC lib/nvmf/stubs.o 00:02:01.869 CC lib/ftl/ftl_init.o 00:02:01.869 CC lib/ftl/ftl_layout.o 00:02:01.869 CC lib/nvmf/vfio_user.o 00:02:01.869 CC lib/ftl/ftl_debug.o 00:02:01.869 CC lib/ftl/ftl_sb.o 00:02:01.869 CC lib/nvmf/rdma.o 00:02:01.869 CC lib/ftl/ftl_io.o 00:02:01.869 CC lib/nvmf/auth.o 00:02:01.869 CC lib/ftl/ftl_l2p.o 00:02:01.869 CC lib/ftl/ftl_l2p_flat.o 00:02:01.869 CC lib/ftl/ftl_nv_cache.o 00:02:01.869 CC lib/ftl/ftl_band.o 00:02:01.869 CC lib/ftl/ftl_band_ops.o 00:02:01.869 CC lib/ftl/ftl_writer.o 00:02:01.869 CC lib/ftl/ftl_rq.o 00:02:01.869 CC lib/ftl/ftl_reloc.o 00:02:01.869 CC lib/ftl/ftl_l2p_cache.o 00:02:01.869 CC lib/ftl/ftl_p2l.o 00:02:01.869 CC lib/ftl/ftl_p2l_log.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:02:01.869 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:01.869 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:01.869 CC lib/ftl/utils/ftl_md.o 00:02:01.869 CC lib/ftl/utils/ftl_conf.o 00:02:01.869 CC lib/ftl/utils/ftl_bitmap.o 00:02:01.869 CC lib/ftl/utils/ftl_mempool.o 00:02:01.869 CC lib/ftl/utils/ftl_property.o 00:02:01.869 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:01.869 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:01.869 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:01.869 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:01.869 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:01.869 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:01.869 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:01.869 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:01.869 CC lib/ftl/base/ftl_base_dev.o 00:02:01.869 CC lib/ftl/ftl_trace.o 00:02:01.869 CC lib/ftl/base/ftl_base_bdev.o 00:02:02.437 LIB libspdk_nbd.a 00:02:02.437 SO libspdk_nbd.so.7.0 00:02:02.437 LIB libspdk_scsi.a 00:02:02.695 SYMLINK libspdk_nbd.so 00:02:02.695 SO libspdk_scsi.so.9.0 00:02:02.695 LIB libspdk_ublk.a 00:02:02.695 SYMLINK libspdk_scsi.so 00:02:02.695 SO libspdk_ublk.so.3.0 00:02:02.695 SYMLINK libspdk_ublk.so 00:02:02.952 CC lib/vhost/vhost.o 00:02:02.952 CC lib/vhost/vhost_rpc.o 00:02:02.952 CC lib/vhost/vhost_scsi.o 00:02:02.952 CC lib/vhost/vhost_blk.o 00:02:02.952 CC lib/iscsi/init_grp.o 00:02:02.952 CC lib/iscsi/conn.o 00:02:02.952 CC lib/vhost/rte_vhost_user.o 00:02:02.952 CC lib/iscsi/iscsi.o 00:02:02.952 CC lib/iscsi/portal_grp.o 00:02:02.952 CC lib/iscsi/param.o 00:02:02.952 CC lib/iscsi/tgt_node.o 00:02:02.952 CC lib/iscsi/iscsi_subsystem.o 00:02:02.952 CC lib/iscsi/iscsi_rpc.o 00:02:02.952 CC lib/iscsi/task.o 00:02:02.952 LIB libspdk_ftl.a 00:02:03.209 SO libspdk_ftl.so.9.0 00:02:03.466 SYMLINK libspdk_ftl.so 00:02:03.725 LIB libspdk_nvmf.a 00:02:03.725 SO libspdk_nvmf.so.20.0 00:02:03.725 LIB libspdk_vhost.a 00:02:03.725 SO libspdk_vhost.so.8.0 00:02:03.984 SYMLINK libspdk_vhost.so 00:02:03.984 SYMLINK libspdk_nvmf.so 00:02:03.984 LIB libspdk_iscsi.a 00:02:03.984 SO libspdk_iscsi.so.8.0 00:02:04.242 SYMLINK libspdk_iscsi.so 00:02:04.501 CC module/env_dpdk/env_dpdk_rpc.o 00:02:04.501 CC module/vfu_device/vfu_virtio.o 00:02:04.501 CC module/vfu_device/vfu_virtio_blk.o 00:02:04.501 CC module/vfu_device/vfu_virtio_scsi.o 00:02:04.501 CC module/vfu_device/vfu_virtio_rpc.o 00:02:04.501 CC module/vfu_device/vfu_virtio_fs.o 00:02:04.758 CC module/keyring/file/keyring.o 00:02:04.758 CC module/keyring/file/keyring_rpc.o 00:02:04.758 CC module/keyring/linux/keyring.o 00:02:04.758 CC module/keyring/linux/keyring_rpc.o 00:02:04.758 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:04.758 CC module/sock/posix/posix.o 00:02:04.758 CC module/blob/bdev/blob_bdev.o 00:02:04.758 LIB libspdk_env_dpdk_rpc.a 00:02:04.758 CC module/accel/error/accel_error.o 00:02:04.758 CC module/accel/dsa/accel_dsa.o 00:02:04.758 CC module/accel/dsa/accel_dsa_rpc.o 00:02:04.758 CC module/accel/error/accel_error_rpc.o 00:02:04.758 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:04.759 CC module/scheduler/gscheduler/gscheduler.o 00:02:04.759 CC module/fsdev/aio/fsdev_aio.o 00:02:04.759 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:04.759 CC module/accel/ioat/accel_ioat.o 00:02:04.759 CC module/accel/iaa/accel_iaa.o 00:02:04.759 CC 
module/accel/ioat/accel_ioat_rpc.o 00:02:04.759 CC module/fsdev/aio/linux_aio_mgr.o 00:02:04.759 CC module/accel/iaa/accel_iaa_rpc.o 00:02:04.759 SO libspdk_env_dpdk_rpc.so.6.0 00:02:04.759 SYMLINK libspdk_env_dpdk_rpc.so 00:02:04.759 LIB libspdk_keyring_file.a 00:02:05.017 LIB libspdk_keyring_linux.a 00:02:05.017 SO libspdk_keyring_file.so.2.0 00:02:05.017 SO libspdk_keyring_linux.so.1.0 00:02:05.017 LIB libspdk_scheduler_dpdk_governor.a 00:02:05.017 LIB libspdk_scheduler_gscheduler.a 00:02:05.017 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:05.017 LIB libspdk_accel_error.a 00:02:05.017 SYMLINK libspdk_keyring_file.so 00:02:05.017 SO libspdk_scheduler_gscheduler.so.4.0 00:02:05.017 SYMLINK libspdk_keyring_linux.so 00:02:05.017 LIB libspdk_accel_iaa.a 00:02:05.017 LIB libspdk_accel_ioat.a 00:02:05.017 LIB libspdk_scheduler_dynamic.a 00:02:05.017 SO libspdk_accel_error.so.2.0 00:02:05.017 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:05.017 SO libspdk_accel_iaa.so.3.0 00:02:05.017 LIB libspdk_blob_bdev.a 00:02:05.017 SO libspdk_accel_ioat.so.6.0 00:02:05.017 SO libspdk_scheduler_dynamic.so.4.0 00:02:05.017 LIB libspdk_accel_dsa.a 00:02:05.017 SYMLINK libspdk_scheduler_gscheduler.so 00:02:05.017 SO libspdk_blob_bdev.so.12.0 00:02:05.017 SYMLINK libspdk_accel_error.so 00:02:05.017 SO libspdk_accel_dsa.so.5.0 00:02:05.017 SYMLINK libspdk_accel_ioat.so 00:02:05.017 SYMLINK libspdk_accel_iaa.so 00:02:05.017 SYMLINK libspdk_scheduler_dynamic.so 00:02:05.017 SYMLINK libspdk_blob_bdev.so 00:02:05.017 SYMLINK libspdk_accel_dsa.so 00:02:05.017 LIB libspdk_vfu_device.a 00:02:05.275 SO libspdk_vfu_device.so.3.0 00:02:05.275 SYMLINK libspdk_vfu_device.so 00:02:05.275 LIB libspdk_fsdev_aio.a 00:02:05.275 LIB libspdk_sock_posix.a 00:02:05.534 SO libspdk_fsdev_aio.so.1.0 00:02:05.534 SO libspdk_sock_posix.so.6.0 00:02:05.534 SYMLINK libspdk_fsdev_aio.so 00:02:05.534 SYMLINK libspdk_sock_posix.so 00:02:05.534 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:05.534 CC module/bdev/delay/vbdev_delay.o 00:02:05.534 CC module/bdev/nvme/bdev_nvme.o 00:02:05.534 CC module/bdev/gpt/gpt.o 00:02:05.534 CC module/bdev/nvme/nvme_rpc.o 00:02:05.534 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:05.534 CC module/bdev/gpt/vbdev_gpt.o 00:02:05.534 CC module/bdev/nvme/bdev_mdns_client.o 00:02:05.534 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:05.534 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:05.534 CC module/bdev/nvme/vbdev_opal.o 00:02:05.534 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:05.534 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:05.534 CC module/bdev/null/bdev_null.o 00:02:05.534 CC module/bdev/null/bdev_null_rpc.o 00:02:05.534 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:05.534 CC module/bdev/lvol/vbdev_lvol.o 00:02:05.534 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:05.534 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:05.534 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:05.534 CC module/bdev/aio/bdev_aio.o 00:02:05.534 CC module/blobfs/bdev/blobfs_bdev.o 00:02:05.534 CC module/bdev/error/vbdev_error.o 00:02:05.534 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:05.534 CC module/bdev/error/vbdev_error_rpc.o 00:02:05.534 CC module/bdev/aio/bdev_aio_rpc.o 00:02:05.534 CC module/bdev/iscsi/bdev_iscsi.o 00:02:05.534 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:05.534 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:05.534 CC module/bdev/ftl/bdev_ftl.o 00:02:05.534 CC module/bdev/split/vbdev_split.o 00:02:05.534 CC module/bdev/split/vbdev_split_rpc.o 00:02:05.534 CC module/bdev/malloc/bdev_malloc.o 00:02:05.534 CC 
module/bdev/passthru/vbdev_passthru.o 00:02:05.534 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:05.534 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:05.534 CC module/bdev/raid/bdev_raid.o 00:02:05.534 CC module/bdev/raid/bdev_raid_sb.o 00:02:05.534 CC module/bdev/raid/bdev_raid_rpc.o 00:02:05.534 CC module/bdev/raid/raid0.o 00:02:05.535 CC module/bdev/raid/raid1.o 00:02:05.535 CC module/bdev/raid/concat.o 00:02:05.793 LIB libspdk_blobfs_bdev.a 00:02:05.793 LIB libspdk_bdev_null.a 00:02:05.793 SO libspdk_blobfs_bdev.so.6.0 00:02:05.793 LIB libspdk_bdev_error.a 00:02:05.793 LIB libspdk_bdev_split.a 00:02:05.793 SO libspdk_bdev_null.so.6.0 00:02:05.793 SO libspdk_bdev_error.so.6.0 00:02:05.793 LIB libspdk_bdev_gpt.a 00:02:05.793 SO libspdk_bdev_split.so.6.0 00:02:05.793 LIB libspdk_bdev_zone_block.a 00:02:05.793 SYMLINK libspdk_blobfs_bdev.so 00:02:05.793 LIB libspdk_bdev_delay.a 00:02:05.793 LIB libspdk_bdev_ftl.a 00:02:05.793 LIB libspdk_bdev_passthru.a 00:02:05.793 SO libspdk_bdev_gpt.so.6.0 00:02:06.051 SO libspdk_bdev_zone_block.so.6.0 00:02:06.051 SYMLINK libspdk_bdev_error.so 00:02:06.051 LIB libspdk_bdev_aio.a 00:02:06.051 SYMLINK libspdk_bdev_null.so 00:02:06.051 SO libspdk_bdev_delay.so.6.0 00:02:06.051 SO libspdk_bdev_passthru.so.6.0 00:02:06.051 SYMLINK libspdk_bdev_split.so 00:02:06.052 SO libspdk_bdev_ftl.so.6.0 00:02:06.052 SO libspdk_bdev_aio.so.6.0 00:02:06.052 LIB libspdk_bdev_malloc.a 00:02:06.052 LIB libspdk_bdev_iscsi.a 00:02:06.052 SYMLINK libspdk_bdev_zone_block.so 00:02:06.052 SYMLINK libspdk_bdev_gpt.so 00:02:06.052 SO libspdk_bdev_malloc.so.6.0 00:02:06.052 SYMLINK libspdk_bdev_passthru.so 00:02:06.052 SYMLINK libspdk_bdev_aio.so 00:02:06.052 SO libspdk_bdev_iscsi.so.6.0 00:02:06.052 SYMLINK libspdk_bdev_delay.so 00:02:06.052 SYMLINK libspdk_bdev_ftl.so 00:02:06.052 LIB libspdk_bdev_lvol.a 00:02:06.052 SYMLINK libspdk_bdev_malloc.so 00:02:06.052 SYMLINK libspdk_bdev_iscsi.so 00:02:06.052 SO libspdk_bdev_lvol.so.6.0 00:02:06.052 LIB libspdk_bdev_virtio.a 00:02:06.052 SO libspdk_bdev_virtio.so.6.0 00:02:06.052 SYMLINK libspdk_bdev_lvol.so 00:02:06.310 SYMLINK libspdk_bdev_virtio.so 00:02:06.310 LIB libspdk_bdev_raid.a 00:02:06.569 SO libspdk_bdev_raid.so.6.0 00:02:06.569 SYMLINK libspdk_bdev_raid.so 00:02:07.505 LIB libspdk_bdev_nvme.a 00:02:07.505 SO libspdk_bdev_nvme.so.7.1 00:02:07.505 SYMLINK libspdk_bdev_nvme.so 00:02:08.072 CC module/event/subsystems/iobuf/iobuf.o 00:02:08.072 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:08.072 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:08.072 CC module/event/subsystems/sock/sock.o 00:02:08.072 CC module/event/subsystems/fsdev/fsdev.o 00:02:08.072 CC module/event/subsystems/keyring/keyring.o 00:02:08.072 CC module/event/subsystems/scheduler/scheduler.o 00:02:08.072 CC module/event/subsystems/vmd/vmd.o 00:02:08.072 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:08.072 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:08.331 LIB libspdk_event_vfu_tgt.a 00:02:08.331 LIB libspdk_event_fsdev.a 00:02:08.331 LIB libspdk_event_sock.a 00:02:08.331 LIB libspdk_event_iobuf.a 00:02:08.331 LIB libspdk_event_scheduler.a 00:02:08.331 LIB libspdk_event_keyring.a 00:02:08.331 LIB libspdk_event_vhost_blk.a 00:02:08.331 SO libspdk_event_vfu_tgt.so.3.0 00:02:08.331 LIB libspdk_event_vmd.a 00:02:08.331 SO libspdk_event_fsdev.so.1.0 00:02:08.331 SO libspdk_event_keyring.so.1.0 00:02:08.331 SO libspdk_event_sock.so.5.0 00:02:08.331 SO libspdk_event_iobuf.so.3.0 00:02:08.331 SO libspdk_event_scheduler.so.4.0 00:02:08.331 SO 
libspdk_event_vhost_blk.so.3.0 00:02:08.331 SO libspdk_event_vmd.so.6.0 00:02:08.331 SYMLINK libspdk_event_fsdev.so 00:02:08.331 SYMLINK libspdk_event_vfu_tgt.so 00:02:08.331 SYMLINK libspdk_event_keyring.so 00:02:08.331 SYMLINK libspdk_event_sock.so 00:02:08.331 SYMLINK libspdk_event_iobuf.so 00:02:08.331 SYMLINK libspdk_event_scheduler.so 00:02:08.331 SYMLINK libspdk_event_vhost_blk.so 00:02:08.331 SYMLINK libspdk_event_vmd.so 00:02:08.590 CC module/event/subsystems/accel/accel.o 00:02:08.849 LIB libspdk_event_accel.a 00:02:08.849 SO libspdk_event_accel.so.6.0 00:02:08.849 SYMLINK libspdk_event_accel.so 00:02:09.108 CC module/event/subsystems/bdev/bdev.o 00:02:09.366 LIB libspdk_event_bdev.a 00:02:09.366 SO libspdk_event_bdev.so.6.0 00:02:09.366 SYMLINK libspdk_event_bdev.so 00:02:09.933 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:09.933 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:09.933 CC module/event/subsystems/ublk/ublk.o 00:02:09.933 CC module/event/subsystems/nbd/nbd.o 00:02:09.933 CC module/event/subsystems/scsi/scsi.o 00:02:09.933 LIB libspdk_event_ublk.a 00:02:09.933 LIB libspdk_event_nbd.a 00:02:09.933 LIB libspdk_event_scsi.a 00:02:09.933 SO libspdk_event_ublk.so.3.0 00:02:09.933 LIB libspdk_event_nvmf.a 00:02:09.933 SO libspdk_event_nbd.so.6.0 00:02:09.933 SO libspdk_event_scsi.so.6.0 00:02:09.933 SO libspdk_event_nvmf.so.6.0 00:02:09.933 SYMLINK libspdk_event_ublk.so 00:02:09.933 SYMLINK libspdk_event_nbd.so 00:02:09.933 SYMLINK libspdk_event_scsi.so 00:02:09.933 SYMLINK libspdk_event_nvmf.so 00:02:10.500 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:10.500 CC module/event/subsystems/iscsi/iscsi.o 00:02:10.500 LIB libspdk_event_vhost_scsi.a 00:02:10.500 SO libspdk_event_vhost_scsi.so.3.0 00:02:10.500 LIB libspdk_event_iscsi.a 00:02:10.500 SYMLINK libspdk_event_vhost_scsi.so 00:02:10.500 SO libspdk_event_iscsi.so.6.0 00:02:10.500 SYMLINK libspdk_event_iscsi.so 00:02:10.759 SO libspdk.so.6.0 00:02:10.759 SYMLINK libspdk.so 00:02:11.017 CC app/spdk_nvme_identify/identify.o 00:02:11.017 CC app/trace_record/trace_record.o 00:02:11.017 CXX app/trace/trace.o 00:02:11.017 CC app/spdk_lspci/spdk_lspci.o 00:02:11.017 CC app/spdk_nvme_perf/perf.o 00:02:11.017 CC app/spdk_top/spdk_top.o 00:02:11.017 CC app/spdk_nvme_discover/discovery_aer.o 00:02:11.017 CC test/rpc_client/rpc_client_test.o 00:02:11.017 TEST_HEADER include/spdk/accel.h 00:02:11.017 TEST_HEADER include/spdk/accel_module.h 00:02:11.017 TEST_HEADER include/spdk/assert.h 00:02:11.017 TEST_HEADER include/spdk/barrier.h 00:02:11.017 TEST_HEADER include/spdk/bdev.h 00:02:11.017 TEST_HEADER include/spdk/base64.h 00:02:11.017 TEST_HEADER include/spdk/bdev_module.h 00:02:11.017 TEST_HEADER include/spdk/bdev_zone.h 00:02:11.017 TEST_HEADER include/spdk/bit_pool.h 00:02:11.017 TEST_HEADER include/spdk/bit_array.h 00:02:11.017 TEST_HEADER include/spdk/blob_bdev.h 00:02:11.017 TEST_HEADER include/spdk/blobfs.h 00:02:11.017 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:11.017 TEST_HEADER include/spdk/conf.h 00:02:11.017 TEST_HEADER include/spdk/cpuset.h 00:02:11.017 TEST_HEADER include/spdk/blob.h 00:02:11.017 TEST_HEADER include/spdk/config.h 00:02:11.017 TEST_HEADER include/spdk/crc16.h 00:02:11.017 TEST_HEADER include/spdk/crc32.h 00:02:11.017 TEST_HEADER include/spdk/crc64.h 00:02:11.017 TEST_HEADER include/spdk/dif.h 00:02:11.017 TEST_HEADER include/spdk/endian.h 00:02:11.017 TEST_HEADER include/spdk/dma.h 00:02:11.017 TEST_HEADER include/spdk/env_dpdk.h 00:02:11.017 TEST_HEADER include/spdk/env.h 00:02:11.017 
CC app/spdk_dd/spdk_dd.o 00:02:11.017 TEST_HEADER include/spdk/event.h 00:02:11.017 CC app/iscsi_tgt/iscsi_tgt.o 00:02:11.017 TEST_HEADER include/spdk/fd.h 00:02:11.017 TEST_HEADER include/spdk/fd_group.h 00:02:11.017 TEST_HEADER include/spdk/file.h 00:02:11.017 TEST_HEADER include/spdk/fsdev_module.h 00:02:11.017 TEST_HEADER include/spdk/fsdev.h 00:02:11.017 TEST_HEADER include/spdk/gpt_spec.h 00:02:11.017 TEST_HEADER include/spdk/hexlify.h 00:02:11.017 TEST_HEADER include/spdk/ftl.h 00:02:11.017 TEST_HEADER include/spdk/histogram_data.h 00:02:11.017 TEST_HEADER include/spdk/idxd.h 00:02:11.017 TEST_HEADER include/spdk/init.h 00:02:11.017 CC app/nvmf_tgt/nvmf_main.o 00:02:11.017 TEST_HEADER include/spdk/idxd_spec.h 00:02:11.017 TEST_HEADER include/spdk/ioat_spec.h 00:02:11.017 TEST_HEADER include/spdk/ioat.h 00:02:11.017 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:11.017 TEST_HEADER include/spdk/json.h 00:02:11.017 TEST_HEADER include/spdk/iscsi_spec.h 00:02:11.017 TEST_HEADER include/spdk/jsonrpc.h 00:02:11.017 TEST_HEADER include/spdk/keyring.h 00:02:11.017 TEST_HEADER include/spdk/keyring_module.h 00:02:11.017 TEST_HEADER include/spdk/log.h 00:02:11.017 TEST_HEADER include/spdk/likely.h 00:02:11.017 TEST_HEADER include/spdk/md5.h 00:02:11.017 TEST_HEADER include/spdk/lvol.h 00:02:11.017 TEST_HEADER include/spdk/memory.h 00:02:11.017 TEST_HEADER include/spdk/mmio.h 00:02:11.017 TEST_HEADER include/spdk/nbd.h 00:02:11.017 TEST_HEADER include/spdk/notify.h 00:02:11.017 TEST_HEADER include/spdk/net.h 00:02:11.017 TEST_HEADER include/spdk/nvme.h 00:02:11.017 TEST_HEADER include/spdk/nvme_intel.h 00:02:11.017 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:11.017 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:11.017 TEST_HEADER include/spdk/nvme_spec.h 00:02:11.017 TEST_HEADER include/spdk/nvme_zns.h 00:02:11.017 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:11.017 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:11.017 TEST_HEADER include/spdk/nvmf.h 00:02:11.017 TEST_HEADER include/spdk/nvmf_spec.h 00:02:11.017 TEST_HEADER include/spdk/opal.h 00:02:11.017 TEST_HEADER include/spdk/nvmf_transport.h 00:02:11.017 TEST_HEADER include/spdk/pci_ids.h 00:02:11.017 TEST_HEADER include/spdk/opal_spec.h 00:02:11.287 TEST_HEADER include/spdk/pipe.h 00:02:11.287 TEST_HEADER include/spdk/queue.h 00:02:11.287 TEST_HEADER include/spdk/reduce.h 00:02:11.287 TEST_HEADER include/spdk/rpc.h 00:02:11.287 TEST_HEADER include/spdk/scheduler.h 00:02:11.287 CC app/spdk_tgt/spdk_tgt.o 00:02:11.287 TEST_HEADER include/spdk/scsi.h 00:02:11.287 TEST_HEADER include/spdk/scsi_spec.h 00:02:11.287 TEST_HEADER include/spdk/stdinc.h 00:02:11.287 TEST_HEADER include/spdk/sock.h 00:02:11.287 TEST_HEADER include/spdk/thread.h 00:02:11.287 TEST_HEADER include/spdk/trace.h 00:02:11.287 TEST_HEADER include/spdk/string.h 00:02:11.287 TEST_HEADER include/spdk/trace_parser.h 00:02:11.287 TEST_HEADER include/spdk/tree.h 00:02:11.287 TEST_HEADER include/spdk/ublk.h 00:02:11.287 TEST_HEADER include/spdk/uuid.h 00:02:11.287 TEST_HEADER include/spdk/version.h 00:02:11.287 TEST_HEADER include/spdk/util.h 00:02:11.287 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:11.287 TEST_HEADER include/spdk/vhost.h 00:02:11.287 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:11.287 TEST_HEADER include/spdk/vmd.h 00:02:11.287 TEST_HEADER include/spdk/xor.h 00:02:11.287 TEST_HEADER include/spdk/zipf.h 00:02:11.287 CXX test/cpp_headers/accel.o 00:02:11.287 CXX test/cpp_headers/accel_module.o 00:02:11.287 CXX test/cpp_headers/assert.o 00:02:11.287 CXX 
test/cpp_headers/barrier.o 00:02:11.287 CXX test/cpp_headers/base64.o 00:02:11.287 CXX test/cpp_headers/bdev.o 00:02:11.287 CXX test/cpp_headers/bdev_zone.o 00:02:11.287 CXX test/cpp_headers/bdev_module.o 00:02:11.287 CXX test/cpp_headers/blob_bdev.o 00:02:11.287 CXX test/cpp_headers/bit_pool.o 00:02:11.287 CXX test/cpp_headers/bit_array.o 00:02:11.287 CXX test/cpp_headers/blobfs_bdev.o 00:02:11.287 CXX test/cpp_headers/blobfs.o 00:02:11.287 CXX test/cpp_headers/conf.o 00:02:11.287 CXX test/cpp_headers/blob.o 00:02:11.287 CXX test/cpp_headers/config.o 00:02:11.287 CXX test/cpp_headers/cpuset.o 00:02:11.287 CXX test/cpp_headers/crc16.o 00:02:11.287 CXX test/cpp_headers/crc64.o 00:02:11.287 CXX test/cpp_headers/dif.o 00:02:11.287 CXX test/cpp_headers/crc32.o 00:02:11.287 CXX test/cpp_headers/dma.o 00:02:11.287 CXX test/cpp_headers/env_dpdk.o 00:02:11.287 CXX test/cpp_headers/env.o 00:02:11.287 CXX test/cpp_headers/fd_group.o 00:02:11.287 CXX test/cpp_headers/endian.o 00:02:11.287 CXX test/cpp_headers/event.o 00:02:11.287 CXX test/cpp_headers/fd.o 00:02:11.287 CXX test/cpp_headers/fsdev.o 00:02:11.287 CXX test/cpp_headers/file.o 00:02:11.287 CXX test/cpp_headers/fsdev_module.o 00:02:11.287 CXX test/cpp_headers/gpt_spec.o 00:02:11.287 CXX test/cpp_headers/ftl.o 00:02:11.287 CXX test/cpp_headers/hexlify.o 00:02:11.287 CXX test/cpp_headers/idxd.o 00:02:11.287 CXX test/cpp_headers/init.o 00:02:11.287 CXX test/cpp_headers/histogram_data.o 00:02:11.287 CXX test/cpp_headers/idxd_spec.o 00:02:11.287 CXX test/cpp_headers/ioat_spec.o 00:02:11.287 CXX test/cpp_headers/iscsi_spec.o 00:02:11.287 CXX test/cpp_headers/keyring.o 00:02:11.287 CXX test/cpp_headers/ioat.o 00:02:11.287 CXX test/cpp_headers/jsonrpc.o 00:02:11.287 CXX test/cpp_headers/keyring_module.o 00:02:11.287 CXX test/cpp_headers/json.o 00:02:11.287 CXX test/cpp_headers/likely.o 00:02:11.287 CXX test/cpp_headers/lvol.o 00:02:11.287 CXX test/cpp_headers/log.o 00:02:11.287 CXX test/cpp_headers/md5.o 00:02:11.287 CXX test/cpp_headers/memory.o 00:02:11.287 CXX test/cpp_headers/mmio.o 00:02:11.287 CXX test/cpp_headers/nbd.o 00:02:11.287 CXX test/cpp_headers/net.o 00:02:11.287 CXX test/cpp_headers/notify.o 00:02:11.287 CXX test/cpp_headers/nvme.o 00:02:11.287 CXX test/cpp_headers/nvme_ocssd.o 00:02:11.287 CXX test/cpp_headers/nvme_intel.o 00:02:11.287 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:11.287 CXX test/cpp_headers/nvme_spec.o 00:02:11.287 CXX test/cpp_headers/nvme_zns.o 00:02:11.287 CXX test/cpp_headers/nvmf_cmd.o 00:02:11.287 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:11.287 CXX test/cpp_headers/nvmf.o 00:02:11.287 CXX test/cpp_headers/nvmf_spec.o 00:02:11.287 CXX test/cpp_headers/nvmf_transport.o 00:02:11.287 CC examples/ioat/perf/perf.o 00:02:11.287 CXX test/cpp_headers/opal.o 00:02:11.287 CC test/thread/poller_perf/poller_perf.o 00:02:11.287 CC examples/ioat/verify/verify.o 00:02:11.287 CXX test/cpp_headers/opal_spec.o 00:02:11.287 CC examples/util/zipf/zipf.o 00:02:11.287 CC test/app/jsoncat/jsoncat.o 00:02:11.287 CC test/env/pci/pci_ut.o 00:02:11.287 CXX test/cpp_headers/pci_ids.o 00:02:11.287 CC app/fio/nvme/fio_plugin.o 00:02:11.287 CC test/app/histogram_perf/histogram_perf.o 00:02:11.287 CC test/app/stub/stub.o 00:02:11.287 CC test/dma/test_dma/test_dma.o 00:02:11.287 CC test/env/memory/memory_ut.o 00:02:11.287 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:11.287 CC test/env/vtophys/vtophys.o 00:02:11.287 CC app/fio/bdev/fio_plugin.o 00:02:11.287 CC test/app/bdev_svc/bdev_svc.o 00:02:11.556 LINK spdk_lspci 00:02:11.556 
LINK rpc_client_test 00:02:11.556 LINK spdk_trace_record 00:02:11.556 LINK interrupt_tgt 00:02:11.556 LINK iscsi_tgt 00:02:11.556 LINK spdk_nvme_discover 00:02:11.817 LINK nvmf_tgt 00:02:11.817 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:11.817 CC test/env/mem_callbacks/mem_callbacks.o 00:02:11.817 LINK poller_perf 00:02:11.817 LINK jsoncat 00:02:11.817 CXX test/cpp_headers/pipe.o 00:02:11.817 LINK vtophys 00:02:11.817 CXX test/cpp_headers/queue.o 00:02:11.817 CXX test/cpp_headers/reduce.o 00:02:11.817 CXX test/cpp_headers/rpc.o 00:02:11.817 CXX test/cpp_headers/scheduler.o 00:02:11.817 CXX test/cpp_headers/scsi.o 00:02:11.817 CXX test/cpp_headers/scsi_spec.o 00:02:11.817 CXX test/cpp_headers/sock.o 00:02:11.817 CXX test/cpp_headers/stdinc.o 00:02:11.817 CXX test/cpp_headers/string.o 00:02:11.817 CXX test/cpp_headers/thread.o 00:02:11.817 CXX test/cpp_headers/trace.o 00:02:11.817 LINK ioat_perf 00:02:11.817 CXX test/cpp_headers/trace_parser.o 00:02:11.817 CXX test/cpp_headers/tree.o 00:02:11.817 LINK histogram_perf 00:02:11.817 CXX test/cpp_headers/ublk.o 00:02:11.817 CXX test/cpp_headers/util.o 00:02:11.817 LINK stub 00:02:11.817 CXX test/cpp_headers/uuid.o 00:02:11.817 CXX test/cpp_headers/version.o 00:02:11.817 CXX test/cpp_headers/vfio_user_pci.o 00:02:11.817 CXX test/cpp_headers/vfio_user_spec.o 00:02:11.817 CXX test/cpp_headers/vhost.o 00:02:11.817 LINK verify 00:02:11.817 CXX test/cpp_headers/vmd.o 00:02:11.817 CXX test/cpp_headers/xor.o 00:02:11.817 LINK zipf 00:02:11.817 CXX test/cpp_headers/zipf.o 00:02:11.817 LINK bdev_svc 00:02:11.817 LINK spdk_tgt 00:02:12.075 LINK env_dpdk_post_init 00:02:12.075 LINK spdk_dd 00:02:12.075 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:12.075 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:12.075 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:12.075 LINK pci_ut 00:02:12.075 LINK spdk_trace 00:02:12.334 LINK spdk_bdev 00:02:12.334 CC test/event/reactor_perf/reactor_perf.o 00:02:12.334 LINK nvme_fuzz 00:02:12.334 CC test/event/event_perf/event_perf.o 00:02:12.334 CC test/event/reactor/reactor.o 00:02:12.334 CC examples/sock/hello_world/hello_sock.o 00:02:12.334 CC examples/idxd/perf/perf.o 00:02:12.334 LINK spdk_nvme 00:02:12.334 CC test/event/app_repeat/app_repeat.o 00:02:12.334 CC examples/vmd/led/led.o 00:02:12.334 CC examples/vmd/lsvmd/lsvmd.o 00:02:12.334 CC test/event/scheduler/scheduler.o 00:02:12.334 LINK test_dma 00:02:12.334 LINK spdk_nvme_perf 00:02:12.334 CC examples/thread/thread/thread_ex.o 00:02:12.334 LINK vhost_fuzz 00:02:12.334 LINK reactor_perf 00:02:12.334 LINK spdk_top 00:02:12.334 LINK mem_callbacks 00:02:12.591 LINK spdk_nvme_identify 00:02:12.591 LINK reactor 00:02:12.591 LINK event_perf 00:02:12.591 CC app/vhost/vhost.o 00:02:12.591 LINK lsvmd 00:02:12.591 LINK led 00:02:12.591 LINK app_repeat 00:02:12.591 LINK hello_sock 00:02:12.591 LINK scheduler 00:02:12.591 LINK thread 00:02:12.591 LINK idxd_perf 00:02:12.849 LINK vhost 00:02:12.849 LINK memory_ut 00:02:12.849 CC test/nvme/reserve/reserve.o 00:02:12.849 CC test/nvme/fdp/fdp.o 00:02:12.849 CC test/nvme/err_injection/err_injection.o 00:02:12.849 CC test/nvme/overhead/overhead.o 00:02:12.849 CC test/nvme/reset/reset.o 00:02:12.849 CC test/nvme/e2edp/nvme_dp.o 00:02:12.849 CC test/nvme/cuse/cuse.o 00:02:12.849 CC test/nvme/sgl/sgl.o 00:02:12.849 CC test/nvme/startup/startup.o 00:02:12.849 CC test/nvme/aer/aer.o 00:02:12.849 CC test/nvme/compliance/nvme_compliance.o 00:02:12.849 CC test/nvme/simple_copy/simple_copy.o 00:02:12.849 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:12.849 CC test/nvme/fused_ordering/fused_ordering.o 00:02:12.849 CC test/nvme/boot_partition/boot_partition.o 00:02:12.849 CC test/nvme/connect_stress/connect_stress.o 00:02:12.849 CC test/accel/dif/dif.o 00:02:12.849 CC test/blobfs/mkfs/mkfs.o 00:02:13.107 CC examples/nvme/hotplug/hotplug.o 00:02:13.107 CC examples/nvme/reconnect/reconnect.o 00:02:13.107 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.107 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.107 CC examples/nvme/arbitration/arbitration.o 00:02:13.107 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.107 CC examples/nvme/abort/abort.o 00:02:13.107 CC examples/nvme/hello_world/hello_world.o 00:02:13.107 CC test/lvol/esnap/esnap.o 00:02:13.107 LINK err_injection 00:02:13.107 LINK boot_partition 00:02:13.107 LINK reserve 00:02:13.107 CC examples/accel/perf/accel_perf.o 00:02:13.107 LINK startup 00:02:13.107 LINK connect_stress 00:02:13.107 LINK doorbell_aers 00:02:13.107 LINK simple_copy 00:02:13.107 CC examples/blob/cli/blobcli.o 00:02:13.107 CC examples/blob/hello_world/hello_blob.o 00:02:13.107 LINK sgl 00:02:13.107 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:13.107 LINK mkfs 00:02:13.107 LINK nvme_dp 00:02:13.107 LINK reset 00:02:13.107 LINK overhead 00:02:13.107 LINK pmr_persistence 00:02:13.107 LINK fdp 00:02:13.107 LINK fused_ordering 00:02:13.107 LINK aer 00:02:13.107 LINK cmb_copy 00:02:13.107 LINK nvme_compliance 00:02:13.107 LINK hello_world 00:02:13.364 LINK hotplug 00:02:13.364 LINK arbitration 00:02:13.364 LINK reconnect 00:02:13.364 LINK abort 00:02:13.364 LINK nvme_manage 00:02:13.364 LINK hello_blob 00:02:13.364 LINK hello_fsdev 00:02:13.364 LINK iscsi_fuzz 00:02:13.621 LINK dif 00:02:13.621 LINK accel_perf 00:02:13.621 LINK blobcli 00:02:13.879 LINK cuse 00:02:14.138 CC examples/bdev/hello_world/hello_bdev.o 00:02:14.138 CC examples/bdev/bdevperf/bdevperf.o 00:02:14.138 CC test/bdev/bdevio/bdevio.o 00:02:14.138 LINK hello_bdev 00:02:14.396 LINK bdevio 00:02:14.654 LINK bdevperf 00:02:15.220 CC examples/nvmf/nvmf/nvmf.o 00:02:15.478 LINK nvmf 00:02:16.853 LINK esnap 00:02:16.853 00:02:16.853 real 0m55.052s 00:02:16.853 user 8m20.986s 00:02:16.853 sys 3m38.029s 00:02:16.853 09:13:29 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.853 09:13:29 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.853 ************************************ 00:02:16.853 END TEST make 00:02:16.853 ************************************ 00:02:16.853 09:13:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.853 09:13:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.853 09:13:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.853 09:13:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.853 09:13:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.853 09:13:29 -- pm/common@44 -- $ pid=3052707 00:02:16.853 09:13:29 -- pm/common@50 -- $ kill -TERM 3052707 00:02:16.853 09:13:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.853 09:13:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.853 09:13:29 -- pm/common@44 -- $ pid=3052708 00:02:16.853 09:13:29 -- pm/common@50 -- $ kill -TERM 3052708 00:02:16.853 09:13:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.853 09:13:29 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.853 09:13:29 -- pm/common@44 -- $ pid=3052712 00:02:16.853 09:13:29 -- pm/common@50 -- $ kill -TERM 3052712 00:02:16.853 09:13:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.853 09:13:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.853 09:13:29 -- pm/common@44 -- $ pid=3052737 00:02:16.853 09:13:29 -- pm/common@50 -- $ sudo -E kill -TERM 3052737 00:02:16.853 09:13:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:16.853 09:13:29 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:17.112 09:13:29 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:17.112 09:13:29 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:17.112 09:13:29 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:17.112 09:13:29 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:17.112 09:13:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:17.112 09:13:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:17.112 09:13:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:17.112 09:13:29 -- scripts/common.sh@336 -- # IFS=.-: 00:02:17.112 09:13:29 -- scripts/common.sh@336 -- # read -ra ver1 00:02:17.112 09:13:29 -- scripts/common.sh@337 -- # IFS=.-: 00:02:17.112 09:13:29 -- scripts/common.sh@337 -- # read -ra ver2 00:02:17.112 09:13:29 -- scripts/common.sh@338 -- # local 'op=<' 00:02:17.112 09:13:29 -- scripts/common.sh@340 -- # ver1_l=2 00:02:17.112 09:13:29 -- scripts/common.sh@341 -- # ver2_l=1 00:02:17.112 09:13:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:17.112 09:13:29 -- scripts/common.sh@344 -- # case "$op" in 00:02:17.112 09:13:29 -- scripts/common.sh@345 -- # : 1 00:02:17.112 09:13:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:17.112 09:13:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:17.112 09:13:29 -- scripts/common.sh@365 -- # decimal 1 00:02:17.112 09:13:29 -- scripts/common.sh@353 -- # local d=1 00:02:17.112 09:13:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:17.112 09:13:29 -- scripts/common.sh@355 -- # echo 1 00:02:17.112 09:13:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:17.112 09:13:29 -- scripts/common.sh@366 -- # decimal 2 00:02:17.112 09:13:29 -- scripts/common.sh@353 -- # local d=2 00:02:17.112 09:13:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:17.113 09:13:29 -- scripts/common.sh@355 -- # echo 2 00:02:17.113 09:13:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:17.113 09:13:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:17.113 09:13:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:17.113 09:13:29 -- scripts/common.sh@368 -- # return 0 00:02:17.113 09:13:29 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:17.113 09:13:29 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:17.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.113 --rc genhtml_branch_coverage=1 00:02:17.113 --rc genhtml_function_coverage=1 00:02:17.113 --rc genhtml_legend=1 00:02:17.113 --rc geninfo_all_blocks=1 00:02:17.113 --rc geninfo_unexecuted_blocks=1 00:02:17.113 00:02:17.113 ' 00:02:17.113 09:13:29 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:17.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.113 --rc genhtml_branch_coverage=1 00:02:17.113 --rc genhtml_function_coverage=1 00:02:17.113 --rc genhtml_legend=1 00:02:17.113 --rc geninfo_all_blocks=1 00:02:17.113 --rc geninfo_unexecuted_blocks=1 00:02:17.113 00:02:17.113 ' 00:02:17.113 09:13:29 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:17.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.113 --rc genhtml_branch_coverage=1 00:02:17.113 --rc genhtml_function_coverage=1 00:02:17.113 --rc genhtml_legend=1 00:02:17.113 --rc geninfo_all_blocks=1 00:02:17.113 --rc geninfo_unexecuted_blocks=1 00:02:17.113 00:02:17.113 ' 00:02:17.113 09:13:29 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:17.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.113 --rc genhtml_branch_coverage=1 00:02:17.113 --rc genhtml_function_coverage=1 00:02:17.113 --rc genhtml_legend=1 00:02:17.113 --rc geninfo_all_blocks=1 00:02:17.113 --rc geninfo_unexecuted_blocks=1 00:02:17.113 00:02:17.113 ' 00:02:17.113 09:13:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.113 09:13:29 -- nvmf/common.sh@7 -- # uname -s 00:02:17.113 09:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.113 09:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.113 09:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.113 09:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.113 09:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.113 09:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.113 09:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.113 09:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.113 09:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.113 09:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.113 09:13:29 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:17.113 09:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:17.113 09:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.113 09:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.113 09:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:17.113 09:13:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.113 09:13:29 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:17.113 09:13:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:17.113 09:13:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.113 09:13:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.113 09:13:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.113 09:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.113 09:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.113 09:13:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.113 09:13:29 -- paths/export.sh@5 -- # export PATH 00:02:17.113 09:13:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.113 09:13:29 -- nvmf/common.sh@51 -- # : 0 00:02:17.113 09:13:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:17.113 09:13:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:17.113 09:13:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.113 09:13:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.113 09:13:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.113 09:13:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:17.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:17.113 09:13:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:17.113 09:13:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:17.113 09:13:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:17.113 09:13:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.113 09:13:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.113 09:13:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.113 09:13:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.113 09:13:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
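[editor note] The autotest.sh trace above saves the existing kernel core_pattern and points core dumps at SPDK's core-collector.sh together with the coredumps output directory. A minimal sketch of that pattern follows; the paths come from the trace, but the write target /proc/sys/kernel/core_pattern is an assumption (the xtrace only shows the echo of the new pattern, not its redirection), so treat this as illustrative rather than the exact autotest implementation.

    #!/usr/bin/env bash
    # Sketch: route kernel core dumps to a collector script (needs root).
    out_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
    collector=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)        # save so it can be restored later
    mkdir -p "$out_dir"
    # Assumption: the traced echo ends up in core_pattern; the pipe syntax hands cores to the script.
    echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern
    # ... run tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern     # restore the original handler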
00:02:17.113 09:13:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.113 09:13:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:17.113 09:13:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.113 09:13:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.113 09:13:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.113 09:13:29 -- spdk/autotest.sh@48 -- # udevadm_pid=3115132 00:02:17.113 09:13:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.113 09:13:29 -- pm/common@17 -- # local monitor 00:02:17.113 09:13:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.113 09:13:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.113 09:13:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.113 09:13:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.113 09:13:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.113 09:13:29 -- pm/common@25 -- # sleep 1 00:02:17.113 09:13:29 -- pm/common@21 -- # date +%s 00:02:17.113 09:13:29 -- pm/common@21 -- # date +%s 00:02:17.113 09:13:29 -- pm/common@21 -- # date +%s 00:02:17.113 09:13:29 -- pm/common@21 -- # date +%s 00:02:17.113 09:13:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734077609 00:02:17.113 09:13:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734077609 00:02:17.113 09:13:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734077609 00:02:17.113 09:13:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734077609 00:02:17.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734077609_collect-cpu-load.pm.log 00:02:17.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734077609_collect-vmstat.pm.log 00:02:17.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734077609_collect-cpu-temp.pm.log 00:02:17.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734077609_collect-bmc-pm.bmc.pm.log 00:02:18.050 09:13:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:18.050 09:13:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:18.050 09:13:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:18.050 09:13:30 -- common/autotest_common.sh@10 -- # set +x 00:02:18.050 09:13:30 -- spdk/autotest.sh@59 -- # create_test_list 00:02:18.050 09:13:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:18.050 09:13:30 -- common/autotest_common.sh@10 -- # set +x 00:02:18.051 09:13:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:18.051 09:13:30 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.051 09:13:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.051 09:13:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:18.051 09:13:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.051 09:13:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.051 09:13:30 -- common/autotest_common.sh@1457 -- # uname 00:02:18.051 09:13:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:18.051 09:13:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.051 09:13:30 -- common/autotest_common.sh@1477 -- # uname 00:02:18.051 09:13:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:18.051 09:13:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:18.051 09:13:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:18.310 lcov: LCOV version 1.15 00:02:18.310 09:13:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:36.493 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:43.060 09:13:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:43.060 09:13:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:43.060 09:13:54 -- common/autotest_common.sh@10 -- # set +x 00:02:43.060 09:13:54 -- spdk/autotest.sh@78 -- # rm -f 00:02:43.060 09:13:54 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.962 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:44.962 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:44.962 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.220 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.220 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.220 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:45.220 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:45.221 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:45.221 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:45.221 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:45.221 09:13:57 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:45.221 09:13:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:45.221 09:13:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:45.221 09:13:57 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:45.221 09:13:57 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:45.221 09:13:57 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:45.221 09:13:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:45.221 09:13:57 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:02:45.221 09:13:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:45.221 09:13:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:45.221 09:13:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:45.221 09:13:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.221 09:13:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:45.221 09:13:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:45.221 09:13:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.221 09:13:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:45.221 09:13:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:45.221 09:13:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:45.221 09:13:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.221 No valid GPT data, bailing 00:02:45.479 09:13:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.479 09:13:57 -- scripts/common.sh@394 -- # pt= 00:02:45.479 09:13:57 -- scripts/common.sh@395 -- # return 1 00:02:45.480 09:13:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.480 1+0 records in 00:02:45.480 1+0 records out 00:02:45.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00211793 s, 495 MB/s 00:02:45.480 09:13:57 -- spdk/autotest.sh@105 -- # sync 00:02:45.480 09:13:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.480 09:13:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.480 09:13:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.756 09:14:02 -- spdk/autotest.sh@111 -- # uname -s 00:02:50.756 09:14:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:50.756 09:14:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:50.756 09:14:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:52.661 Hugepages 00:02:52.661 node hugesize free / total 00:02:52.661 node0 1048576kB 0 / 0 00:02:52.661 node0 2048kB 0 / 0 00:02:52.661 node1 1048576kB 0 / 0 00:02:52.661 node1 2048kB 0 / 0 00:02:52.661 00:02:52.661 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:52.661 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:52.661 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:52.661 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:52.661 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:52.661 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:52.661 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:52.919 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:52.919 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:52.919 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:52.919 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:02:52.919 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:52.919 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:52.919 09:14:05 -- spdk/autotest.sh@117 -- # uname -s 00:02:52.919 09:14:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:52.919 09:14:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:52.919 09:14:05 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.454 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:55.454 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:56.391 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:56.391 09:14:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:57.327 09:14:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:57.327 09:14:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:57.327 09:14:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:57.327 09:14:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:57.327 09:14:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:57.327 09:14:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:57.327 09:14:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:57.327 09:14:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:57.327 09:14:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:57.585 09:14:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:57.585 09:14:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:02:57.585 09:14:09 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.124 Waiting for block devices as requested 00:03:00.124 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:00.124 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:00.124 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:00.382 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:00.382 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:00.382 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:00.382 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:00.641 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:00.641 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:00.641 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:00.900 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
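[editor note] The get_nvme_bdfs helper traced in this part of the log builds its list of NVMe PCI addresses from gen_nvme.sh output piped through jq. A standalone sketch of the same pattern is below; the repository path matches the log, and jq is assumed to be installed as it is on this CI node.

    #!/usr/bin/env bash
    # Sketch: list NVMe controller BDFs the way the traced helper does.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # on this node the only entry is 0000:5e:00.0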
00:03:00.900 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:00.900 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:00.900 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:01.159 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:01.159 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:01.159 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:01.418 09:14:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:01.418 09:14:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:01.418 09:14:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:01.418 09:14:13 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:01.418 09:14:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:01.418 09:14:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:01.418 09:14:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:01.418 09:14:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:01.418 09:14:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:01.419 09:14:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:01.419 09:14:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:01.419 09:14:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:01.419 09:14:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:01.419 09:14:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:01.419 09:14:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:01.419 09:14:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:01.419 09:14:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:01.419 09:14:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:01.419 09:14:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:01.419 09:14:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:01.419 09:14:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:01.419 09:14:13 -- common/autotest_common.sh@1543 -- # continue 00:03:01.419 09:14:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:01.419 09:14:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:01.419 09:14:13 -- common/autotest_common.sh@10 -- # set +x 00:03:01.419 09:14:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:01.419 09:14:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:01.419 09:14:13 -- common/autotest_common.sh@10 -- # set +x 00:03:01.419 09:14:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.705 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:04.705 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:04.963 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.222 09:14:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:05.222 09:14:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:05.222 09:14:17 -- common/autotest_common.sh@10 -- # set +x 00:03:05.222 09:14:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:05.222 09:14:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:05.222 09:14:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:05.222 09:14:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:05.222 09:14:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:05.222 09:14:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:05.222 09:14:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:05.222 09:14:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:05.222 09:14:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:05.222 09:14:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:05.222 09:14:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:05.222 09:14:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:05.222 09:14:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:05.222 09:14:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:05.222 09:14:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:05.222 09:14:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:05.222 09:14:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:05.222 09:14:17 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:05.222 09:14:17 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:05.222 09:14:17 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:05.222 09:14:17 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:05.222 09:14:17 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:05.222 09:14:17 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:05.222 09:14:17 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3129051 00:03:05.222 09:14:17 -- common/autotest_common.sh@1585 -- # waitforlisten 3129051 00:03:05.222 09:14:17 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:05.222 09:14:17 -- common/autotest_common.sh@835 -- # '[' -z 3129051 ']' 00:03:05.222 09:14:17 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:05.222 09:14:17 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:05.222 09:14:17 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:05.222 09:14:17 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:05.222 09:14:17 -- common/autotest_common.sh@10 -- # set +x 00:03:05.480 [2024-12-13 09:14:17.589330] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
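[editor note] The opal_revert_cleanup step above selects controllers by PCI device ID through sysfs before starting spdk_tgt. A small sketch of that filter follows; the device ID 0x0a54 and the BDF are taken from the log, while the bdfs array is assumed to have been produced by the gen_nvme.sh pattern shown earlier.

    #!/usr/bin/env bash
    # Sketch: keep only BDFs whose PCI device ID matches the 0x0a54 Intel NVMe part seen in the log.
    target_id=0x0a54
    filtered=()
    for bdf in "${bdfs[@]}"; do                               # assumption: bdfs[] already populated
        device=$(cat "/sys/bus/pci/devices/$bdf/device")      # e.g. 0x0a54 for 0000:5e:00.0
        [[ $device == "$target_id" ]] && filtered+=("$bdf")
    done
    printf '%s\n' "${filtered[@]}"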
00:03:05.480 [2024-12-13 09:14:17.589379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129051 ] 00:03:05.480 [2024-12-13 09:14:17.653398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:05.480 [2024-12-13 09:14:17.696351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:05.738 09:14:17 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:05.739 09:14:17 -- common/autotest_common.sh@868 -- # return 0 00:03:05.739 09:14:17 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:05.739 09:14:17 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:05.739 09:14:17 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:09.024 nvme0n1 00:03:09.024 09:14:20 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:09.024 [2024-12-13 09:14:21.057973] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:09.024 [2024-12-13 09:14:21.058001] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:09.024 request: 00:03:09.024 { 00:03:09.024 "nvme_ctrlr_name": "nvme0", 00:03:09.024 "password": "test", 00:03:09.024 "method": "bdev_nvme_opal_revert", 00:03:09.024 "req_id": 1 00:03:09.024 } 00:03:09.024 Got JSON-RPC error response 00:03:09.024 response: 00:03:09.024 { 00:03:09.024 "code": -32603, 00:03:09.024 "message": "Internal error" 00:03:09.024 } 00:03:09.024 09:14:21 -- common/autotest_common.sh@1591 -- # true 00:03:09.024 09:14:21 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:09.024 09:14:21 -- common/autotest_common.sh@1595 -- # killprocess 3129051 00:03:09.024 09:14:21 -- common/autotest_common.sh@954 -- # '[' -z 3129051 ']' 00:03:09.024 09:14:21 -- common/autotest_common.sh@958 -- # kill -0 3129051 00:03:09.024 09:14:21 -- common/autotest_common.sh@959 -- # uname 00:03:09.024 09:14:21 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:09.024 09:14:21 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129051 00:03:09.024 09:14:21 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:09.024 09:14:21 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:09.024 09:14:21 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129051' 00:03:09.024 killing process with pid 3129051 00:03:09.024 09:14:21 -- common/autotest_common.sh@973 -- # kill 3129051 00:03:09.024 09:14:21 -- common/autotest_common.sh@978 -- # wait 3129051 00:03:10.399 09:14:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:10.399 09:14:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:10.399 09:14:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:10.399 09:14:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:10.399 09:14:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:10.399 09:14:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.399 09:14:22 -- common/autotest_common.sh@10 -- # set +x 00:03:10.399 09:14:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:10.399 09:14:22 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:10.399 09:14:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:10.399 09:14:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:10.399 09:14:22 -- common/autotest_common.sh@10 -- # set +x 00:03:10.399 ************************************ 00:03:10.399 START TEST env 00:03:10.399 ************************************ 00:03:10.399 09:14:22 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:10.658 * Looking for test storage... 00:03:10.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:10.658 09:14:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.658 09:14:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.658 09:14:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.658 09:14:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.658 09:14:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.658 09:14:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.658 09:14:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.658 09:14:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.658 09:14:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.658 09:14:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.658 09:14:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.658 09:14:22 env -- scripts/common.sh@344 -- # case "$op" in 00:03:10.658 09:14:22 env -- scripts/common.sh@345 -- # : 1 00:03:10.658 09:14:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.658 09:14:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.658 09:14:22 env -- scripts/common.sh@365 -- # decimal 1 00:03:10.658 09:14:22 env -- scripts/common.sh@353 -- # local d=1 00:03:10.658 09:14:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.658 09:14:22 env -- scripts/common.sh@355 -- # echo 1 00:03:10.658 09:14:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.658 09:14:22 env -- scripts/common.sh@366 -- # decimal 2 00:03:10.658 09:14:22 env -- scripts/common.sh@353 -- # local d=2 00:03:10.658 09:14:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.658 09:14:22 env -- scripts/common.sh@355 -- # echo 2 00:03:10.658 09:14:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.658 09:14:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.658 09:14:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.658 09:14:22 env -- scripts/common.sh@368 -- # return 0 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.658 --rc genhtml_branch_coverage=1 00:03:10.658 --rc genhtml_function_coverage=1 00:03:10.658 --rc genhtml_legend=1 00:03:10.658 --rc geninfo_all_blocks=1 00:03:10.658 --rc geninfo_unexecuted_blocks=1 00:03:10.658 00:03:10.658 ' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.658 --rc genhtml_branch_coverage=1 00:03:10.658 --rc genhtml_function_coverage=1 00:03:10.658 --rc genhtml_legend=1 00:03:10.658 --rc geninfo_all_blocks=1 00:03:10.658 --rc geninfo_unexecuted_blocks=1 00:03:10.658 00:03:10.658 ' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.658 --rc genhtml_branch_coverage=1 00:03:10.658 --rc genhtml_function_coverage=1 00:03:10.658 --rc genhtml_legend=1 00:03:10.658 --rc geninfo_all_blocks=1 00:03:10.658 --rc geninfo_unexecuted_blocks=1 00:03:10.658 00:03:10.658 ' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:10.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.658 --rc genhtml_branch_coverage=1 00:03:10.658 --rc genhtml_function_coverage=1 00:03:10.658 --rc genhtml_legend=1 00:03:10.658 --rc geninfo_all_blocks=1 00:03:10.658 --rc geninfo_unexecuted_blocks=1 00:03:10.658 00:03:10.658 ' 00:03:10.658 09:14:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:10.658 09:14:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:10.658 09:14:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:10.658 ************************************ 00:03:10.658 START TEST env_memory 00:03:10.658 ************************************ 00:03:10.659 09:14:22 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:10.659 00:03:10.659 00:03:10.659 CUnit - A unit testing framework for C - Version 2.1-3 00:03:10.659 http://cunit.sourceforge.net/ 00:03:10.659 00:03:10.659 00:03:10.659 Suite: memory 00:03:10.659 Test: alloc and free memory map ...[2024-12-13 09:14:22.963552] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:10.659 passed 00:03:10.659 Test: mem map translation ...[2024-12-13 09:14:22.982995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:10.659 [2024-12-13 09:14:22.983010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:10.659 [2024-12-13 09:14:22.983046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:10.659 [2024-12-13 09:14:22.983054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:10.659 passed 00:03:10.659 Test: mem map registration ...[2024-12-13 09:14:23.020052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:10.659 [2024-12-13 09:14:23.020067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:10.918 passed 00:03:10.918 Test: mem map adjacent registrations ...passed 00:03:10.918 00:03:10.918 Run Summary: Type Total Ran Passed Failed Inactive 00:03:10.918 suites 1 1 n/a 0 0 00:03:10.918 tests 4 4 4 0 0 00:03:10.918 asserts 152 152 152 0 n/a 00:03:10.918 00:03:10.918 Elapsed time = 0.135 seconds 00:03:10.918 00:03:10.918 real 0m0.148s 00:03:10.918 user 0m0.139s 00:03:10.918 sys 0m0.008s 00:03:10.918 09:14:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:10.918 09:14:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:10.918 ************************************ 00:03:10.918 END TEST env_memory 00:03:10.918 ************************************ 00:03:10.918 09:14:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:10.918 09:14:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:10.918 09:14:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:10.918 09:14:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:10.918 ************************************ 00:03:10.918 START TEST env_vtophys 00:03:10.918 ************************************ 00:03:10.918 09:14:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:10.918 EAL: lib.eal log level changed from notice to debug 00:03:10.918 EAL: Detected lcore 0 as core 0 on socket 0 00:03:10.918 EAL: Detected lcore 1 as core 1 on socket 0 00:03:10.918 EAL: Detected lcore 2 as core 2 on socket 0 00:03:10.918 EAL: Detected lcore 3 as core 3 on socket 0 00:03:10.918 EAL: Detected lcore 4 as core 4 on socket 0 00:03:10.918 EAL: Detected lcore 5 as core 5 on socket 0 00:03:10.918 EAL: Detected lcore 6 as core 6 on socket 0 00:03:10.918 EAL: Detected lcore 7 as core 8 on socket 0 00:03:10.918 EAL: Detected lcore 8 as core 9 on socket 0 00:03:10.918 EAL: Detected lcore 9 as core 10 on socket 0 00:03:10.918 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:10.918 EAL: Detected lcore 11 as core 12 on socket 0 00:03:10.918 EAL: Detected lcore 12 as core 13 on socket 0 00:03:10.918 EAL: Detected lcore 13 as core 16 on socket 0 00:03:10.918 EAL: Detected lcore 14 as core 17 on socket 0 00:03:10.918 EAL: Detected lcore 15 as core 18 on socket 0 00:03:10.918 EAL: Detected lcore 16 as core 19 on socket 0 00:03:10.918 EAL: Detected lcore 17 as core 20 on socket 0 00:03:10.918 EAL: Detected lcore 18 as core 21 on socket 0 00:03:10.918 EAL: Detected lcore 19 as core 25 on socket 0 00:03:10.918 EAL: Detected lcore 20 as core 26 on socket 0 00:03:10.918 EAL: Detected lcore 21 as core 27 on socket 0 00:03:10.918 EAL: Detected lcore 22 as core 28 on socket 0 00:03:10.918 EAL: Detected lcore 23 as core 29 on socket 0 00:03:10.918 EAL: Detected lcore 24 as core 0 on socket 1 00:03:10.918 EAL: Detected lcore 25 as core 1 on socket 1 00:03:10.918 EAL: Detected lcore 26 as core 2 on socket 1 00:03:10.918 EAL: Detected lcore 27 as core 3 on socket 1 00:03:10.918 EAL: Detected lcore 28 as core 4 on socket 1 00:03:10.918 EAL: Detected lcore 29 as core 5 on socket 1 00:03:10.918 EAL: Detected lcore 30 as core 6 on socket 1 00:03:10.918 EAL: Detected lcore 31 as core 8 on socket 1 00:03:10.918 EAL: Detected lcore 32 as core 9 on socket 1 00:03:10.918 EAL: Detected lcore 33 as core 10 on socket 1 00:03:10.918 EAL: Detected lcore 34 as core 11 on socket 1 00:03:10.918 EAL: Detected lcore 35 as core 12 on socket 1 00:03:10.918 EAL: Detected lcore 36 as core 13 on socket 1 00:03:10.918 EAL: Detected lcore 37 as core 16 on socket 1 00:03:10.918 EAL: Detected lcore 38 as core 17 on socket 1 00:03:10.918 EAL: Detected lcore 39 as core 18 on socket 1 00:03:10.918 EAL: Detected lcore 40 as core 19 on socket 1 00:03:10.918 EAL: Detected lcore 41 as core 20 on socket 1 00:03:10.918 EAL: Detected lcore 42 as core 21 on socket 1 00:03:10.918 EAL: Detected lcore 43 as core 25 on socket 1 00:03:10.918 EAL: Detected lcore 44 as core 26 on socket 1 00:03:10.918 EAL: Detected lcore 45 as core 27 on socket 1 00:03:10.918 EAL: Detected lcore 46 as core 28 on socket 1 00:03:10.918 EAL: Detected lcore 47 as core 29 on socket 1 00:03:10.918 EAL: Detected lcore 48 as core 0 on socket 0 00:03:10.918 EAL: Detected lcore 49 as core 1 on socket 0 00:03:10.918 EAL: Detected lcore 50 as core 2 on socket 0 00:03:10.918 EAL: Detected lcore 51 as core 3 on socket 0 00:03:10.918 EAL: Detected lcore 52 as core 4 on socket 0 00:03:10.918 EAL: Detected lcore 53 as core 5 on socket 0 00:03:10.918 EAL: Detected lcore 54 as core 6 on socket 0 00:03:10.918 EAL: Detected lcore 55 as core 8 on socket 0 00:03:10.918 EAL: Detected lcore 56 as core 9 on socket 0 00:03:10.918 EAL: Detected lcore 57 as core 10 on socket 0 00:03:10.918 EAL: Detected lcore 58 as core 11 on socket 0 00:03:10.918 EAL: Detected lcore 59 as core 12 on socket 0 00:03:10.918 EAL: Detected lcore 60 as core 13 on socket 0 00:03:10.918 EAL: Detected lcore 61 as core 16 on socket 0 00:03:10.918 EAL: Detected lcore 62 as core 17 on socket 0 00:03:10.919 EAL: Detected lcore 63 as core 18 on socket 0 00:03:10.919 EAL: Detected lcore 64 as core 19 on socket 0 00:03:10.919 EAL: Detected lcore 65 as core 20 on socket 0 00:03:10.919 EAL: Detected lcore 66 as core 21 on socket 0 00:03:10.919 EAL: Detected lcore 67 as core 25 on socket 0 00:03:10.919 EAL: Detected lcore 68 as core 26 on socket 0 00:03:10.919 EAL: Detected lcore 69 as core 27 on socket 0 00:03:10.919 EAL: Detected lcore 70 as core 28 on socket 0 00:03:10.919 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:10.919 EAL: Detected lcore 72 as core 0 on socket 1 00:03:10.919 EAL: Detected lcore 73 as core 1 on socket 1 00:03:10.919 EAL: Detected lcore 74 as core 2 on socket 1 00:03:10.919 EAL: Detected lcore 75 as core 3 on socket 1 00:03:10.919 EAL: Detected lcore 76 as core 4 on socket 1 00:03:10.919 EAL: Detected lcore 77 as core 5 on socket 1 00:03:10.919 EAL: Detected lcore 78 as core 6 on socket 1 00:03:10.919 EAL: Detected lcore 79 as core 8 on socket 1 00:03:10.919 EAL: Detected lcore 80 as core 9 on socket 1 00:03:10.919 EAL: Detected lcore 81 as core 10 on socket 1 00:03:10.919 EAL: Detected lcore 82 as core 11 on socket 1 00:03:10.919 EAL: Detected lcore 83 as core 12 on socket 1 00:03:10.919 EAL: Detected lcore 84 as core 13 on socket 1 00:03:10.919 EAL: Detected lcore 85 as core 16 on socket 1 00:03:10.919 EAL: Detected lcore 86 as core 17 on socket 1 00:03:10.919 EAL: Detected lcore 87 as core 18 on socket 1 00:03:10.919 EAL: Detected lcore 88 as core 19 on socket 1 00:03:10.919 EAL: Detected lcore 89 as core 20 on socket 1 00:03:10.919 EAL: Detected lcore 90 as core 21 on socket 1 00:03:10.919 EAL: Detected lcore 91 as core 25 on socket 1 00:03:10.919 EAL: Detected lcore 92 as core 26 on socket 1 00:03:10.919 EAL: Detected lcore 93 as core 27 on socket 1 00:03:10.919 EAL: Detected lcore 94 as core 28 on socket 1 00:03:10.919 EAL: Detected lcore 95 as core 29 on socket 1 00:03:10.919 EAL: Maximum logical cores by configuration: 128 00:03:10.919 EAL: Detected CPU lcores: 96 00:03:10.919 EAL: Detected NUMA nodes: 2 00:03:10.919 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:10.919 EAL: Detected shared linkage of DPDK 00:03:10.919 EAL: No shared files mode enabled, IPC will be disabled 00:03:10.919 EAL: Bus pci wants IOVA as 'DC' 00:03:10.919 EAL: Buses did not request a specific IOVA mode. 00:03:10.919 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:10.919 EAL: Selected IOVA mode 'VA' 00:03:10.919 EAL: Probing VFIO support... 00:03:10.919 EAL: IOMMU type 1 (Type 1) is supported 00:03:10.919 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:10.919 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:10.919 EAL: VFIO support initialized 00:03:10.919 EAL: Ask a virtual area of 0x2e000 bytes 00:03:10.919 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:10.919 EAL: Setting up physically contiguous memory... 
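[editor note] As context for the EAL memory setup that follows, the per-NUMA-node hugepage counts reported by the earlier setup.sh status output can also be read directly from sysfs. The generic sketch below is not something the log itself runs; it only uses standard kernel hugepage paths and is included for orientation.

    #!/usr/bin/env bash
    # Sketch: report free/total hugepages per NUMA node and page size.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}                       # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "$(basename "$node") $size free=$free total=$total"
        done
    done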
00:03:10.919 EAL: Setting maximum number of open files to 524288 00:03:10.919 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:10.919 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:10.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:10.919 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:10.919 EAL: Ask a virtual area of 0x61000 bytes 00:03:10.919 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:10.919 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:10.919 EAL: Ask a virtual area of 0x400000000 bytes 00:03:10.919 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:10.919 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:10.919 EAL: Hugepages will be freed exactly as allocated. 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: TSC frequency is ~2100000 KHz 00:03:10.919 EAL: Main lcore 0 is ready (tid=7f735b19ba00;cpuset=[0]) 00:03:10.919 EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 0 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 2MB 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:10.919 EAL: Mem event callback 'spdk:(nil)' registered 00:03:10.919 00:03:10.919 00:03:10.919 CUnit - A unit testing framework for C - Version 2.1-3 00:03:10.919 http://cunit.sourceforge.net/ 00:03:10.919 00:03:10.919 00:03:10.919 Suite: components_suite 00:03:10.919 Test: vtophys_malloc_test ...passed 00:03:10.919 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 4MB 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was shrunk by 4MB 00:03:10.919 EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 6MB 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was shrunk by 6MB 00:03:10.919 EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 10MB 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was shrunk by 10MB 00:03:10.919 EAL: Trying to obtain current memory policy. 
00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 18MB 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was shrunk by 18MB 00:03:10.919 EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was expanded by 34MB 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.919 EAL: No shared files mode enabled, IPC is disabled 00:03:10.919 EAL: Heap on socket 0 was shrunk by 34MB 00:03:10.919 EAL: Trying to obtain current memory policy. 00:03:10.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.919 EAL: Restoring previous memory policy: 4 00:03:10.919 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.919 EAL: request: mp_malloc_sync 00:03:10.920 EAL: No shared files mode enabled, IPC is disabled 00:03:10.920 EAL: Heap on socket 0 was expanded by 66MB 00:03:10.920 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.920 EAL: request: mp_malloc_sync 00:03:10.920 EAL: No shared files mode enabled, IPC is disabled 00:03:10.920 EAL: Heap on socket 0 was shrunk by 66MB 00:03:10.920 EAL: Trying to obtain current memory policy. 00:03:10.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:11.178 EAL: Restoring previous memory policy: 4 00:03:11.178 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.178 EAL: request: mp_malloc_sync 00:03:11.178 EAL: No shared files mode enabled, IPC is disabled 00:03:11.178 EAL: Heap on socket 0 was expanded by 130MB 00:03:11.178 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.178 EAL: request: mp_malloc_sync 00:03:11.178 EAL: No shared files mode enabled, IPC is disabled 00:03:11.178 EAL: Heap on socket 0 was shrunk by 130MB 00:03:11.178 EAL: Trying to obtain current memory policy. 00:03:11.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:11.178 EAL: Restoring previous memory policy: 4 00:03:11.178 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.178 EAL: request: mp_malloc_sync 00:03:11.178 EAL: No shared files mode enabled, IPC is disabled 00:03:11.178 EAL: Heap on socket 0 was expanded by 258MB 00:03:11.178 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.178 EAL: request: mp_malloc_sync 00:03:11.178 EAL: No shared files mode enabled, IPC is disabled 00:03:11.178 EAL: Heap on socket 0 was shrunk by 258MB 00:03:11.178 EAL: Trying to obtain current memory policy. 
00:03:11.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:11.437 EAL: Restoring previous memory policy: 4 00:03:11.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.437 EAL: request: mp_malloc_sync 00:03:11.437 EAL: No shared files mode enabled, IPC is disabled 00:03:11.437 EAL: Heap on socket 0 was expanded by 514MB 00:03:11.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.437 EAL: request: mp_malloc_sync 00:03:11.437 EAL: No shared files mode enabled, IPC is disabled 00:03:11.437 EAL: Heap on socket 0 was shrunk by 514MB 00:03:11.437 EAL: Trying to obtain current memory policy. 00:03:11.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:11.695 EAL: Restoring previous memory policy: 4 00:03:11.695 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.695 EAL: request: mp_malloc_sync 00:03:11.695 EAL: No shared files mode enabled, IPC is disabled 00:03:11.695 EAL: Heap on socket 0 was expanded by 1026MB 00:03:11.695 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.954 EAL: request: mp_malloc_sync 00:03:11.954 EAL: No shared files mode enabled, IPC is disabled 00:03:11.954 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:11.954 passed 00:03:11.954 00:03:11.954 Run Summary: Type Total Ran Passed Failed Inactive 00:03:11.954 suites 1 1 n/a 0 0 00:03:11.954 tests 2 2 2 0 0 00:03:11.954 asserts 497 497 497 0 n/a 00:03:11.954 00:03:11.954 Elapsed time = 0.960 seconds 00:03:11.954 EAL: Calling mem event callback 'spdk:(nil)' 00:03:11.954 EAL: request: mp_malloc_sync 00:03:11.954 EAL: No shared files mode enabled, IPC is disabled 00:03:11.954 EAL: Heap on socket 0 was shrunk by 2MB 00:03:11.954 EAL: No shared files mode enabled, IPC is disabled 00:03:11.954 EAL: No shared files mode enabled, IPC is disabled 00:03:11.954 EAL: No shared files mode enabled, IPC is disabled 00:03:11.954 00:03:11.954 real 0m1.083s 00:03:11.954 user 0m0.634s 00:03:11.954 sys 0m0.418s 00:03:11.954 09:14:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:11.954 09:14:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:11.954 ************************************ 00:03:11.954 END TEST env_vtophys 00:03:11.954 ************************************ 00:03:11.954 09:14:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:11.954 09:14:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:11.954 09:14:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:11.954 09:14:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:11.954 ************************************ 00:03:11.954 START TEST env_pci 00:03:11.954 ************************************ 00:03:11.954 09:14:24 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:11.954 00:03:11.954 00:03:11.954 CUnit - A unit testing framework for C - Version 2.1-3 00:03:11.954 http://cunit.sourceforge.net/ 00:03:11.954 00:03:11.954 00:03:11.954 Suite: pci 00:03:11.954 Test: pci_hook ...[2024-12-13 09:14:24.304973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3130286 has claimed it 00:03:12.213 EAL: Cannot find device (10000:00:01.0) 00:03:12.213 EAL: Failed to attach device on primary process 00:03:12.213 passed 00:03:12.213 00:03:12.213 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:12.213 suites 1 1 n/a 0 0 00:03:12.213 tests 1 1 1 0 0 00:03:12.213 asserts 25 25 25 0 n/a 00:03:12.213 00:03:12.213 Elapsed time = 0.028 seconds 00:03:12.213 00:03:12.213 real 0m0.047s 00:03:12.213 user 0m0.016s 00:03:12.213 sys 0m0.031s 00:03:12.213 09:14:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:12.213 09:14:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:12.213 ************************************ 00:03:12.213 END TEST env_pci 00:03:12.213 ************************************ 00:03:12.213 09:14:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:12.213 09:14:24 env -- env/env.sh@15 -- # uname 00:03:12.213 09:14:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:12.213 09:14:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:12.213 09:14:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:12.213 09:14:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:12.213 09:14:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:12.213 09:14:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.213 ************************************ 00:03:12.213 START TEST env_dpdk_post_init 00:03:12.213 ************************************ 00:03:12.213 09:14:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:12.213 EAL: Detected CPU lcores: 96 00:03:12.213 EAL: Detected NUMA nodes: 2 00:03:12.213 EAL: Detected shared linkage of DPDK 00:03:12.213 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:12.213 EAL: Selected IOVA mode 'VA' 00:03:12.213 EAL: VFIO support initialized 00:03:12.213 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:12.213 EAL: Using IOMMU type 1 (Type 1) 00:03:12.213 EAL: Ignore mapping IO port bar(1) 00:03:12.213 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:12.213 EAL: Ignore mapping IO port bar(1) 00:03:12.213 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:12.213 EAL: Ignore mapping IO port bar(1) 00:03:12.213 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:12.213 EAL: Ignore mapping IO port bar(1) 00:03:12.213 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:12.472 EAL: Ignore mapping IO port bar(1) 00:03:12.472 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:12.472 EAL: Ignore mapping IO port bar(1) 00:03:12.472 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:12.472 EAL: Ignore mapping IO port bar(1) 00:03:12.472 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:12.472 EAL: Ignore mapping IO port bar(1) 00:03:12.472 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:13.038 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:13.038 EAL: Ignore mapping IO port bar(1) 00:03:13.038 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:13.038 EAL: Ignore mapping IO port bar(1) 00:03:13.038 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:13.038 EAL: Ignore mapping IO port bar(1) 00:03:13.038 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:13.038 EAL: Ignore mapping IO port bar(1) 00:03:13.038 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:13.296 EAL: Ignore mapping IO port bar(1) 00:03:13.296 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:13.296 EAL: Ignore mapping IO port bar(1) 00:03:13.296 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:13.296 EAL: Ignore mapping IO port bar(1) 00:03:13.296 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:13.296 EAL: Ignore mapping IO port bar(1) 00:03:13.296 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:16.576 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:16.576 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:16.576 Starting DPDK initialization... 00:03:16.576 Starting SPDK post initialization... 00:03:16.576 SPDK NVMe probe 00:03:16.576 Attaching to 0000:5e:00.0 00:03:16.576 Attached to 0000:5e:00.0 00:03:16.576 Cleaning up... 00:03:16.576 00:03:16.576 real 0m4.345s 00:03:16.576 user 0m2.978s 00:03:16.577 sys 0m0.444s 00:03:16.577 09:14:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.577 09:14:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:16.577 ************************************ 00:03:16.577 END TEST env_dpdk_post_init 00:03:16.577 ************************************ 00:03:16.577 09:14:28 env -- env/env.sh@26 -- # uname 00:03:16.577 09:14:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:16.577 09:14:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:16.577 09:14:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.577 09:14:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.577 09:14:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.577 ************************************ 00:03:16.577 START TEST env_mem_callbacks 00:03:16.577 ************************************ 00:03:16.577 09:14:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:16.577 EAL: Detected CPU lcores: 96 00:03:16.577 EAL: Detected NUMA nodes: 2 00:03:16.577 EAL: Detected shared linkage of DPDK 00:03:16.577 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:16.577 EAL: Selected IOVA mode 'VA' 00:03:16.577 EAL: VFIO support initialized 00:03:16.577 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:16.577 00:03:16.577 00:03:16.577 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.577 http://cunit.sourceforge.net/ 00:03:16.577 00:03:16.577 00:03:16.577 Suite: memory 00:03:16.577 Test: test ... 
00:03:16.577 register 0x200000200000 2097152 00:03:16.577 malloc 3145728 00:03:16.577 register 0x200000400000 4194304 00:03:16.577 buf 0x200000500000 len 3145728 PASSED 00:03:16.577 malloc 64 00:03:16.577 buf 0x2000004fff40 len 64 PASSED 00:03:16.577 malloc 4194304 00:03:16.577 register 0x200000800000 6291456 00:03:16.577 buf 0x200000a00000 len 4194304 PASSED 00:03:16.577 free 0x200000500000 3145728 00:03:16.577 free 0x2000004fff40 64 00:03:16.577 unregister 0x200000400000 4194304 PASSED 00:03:16.577 free 0x200000a00000 4194304 00:03:16.577 unregister 0x200000800000 6291456 PASSED 00:03:16.577 malloc 8388608 00:03:16.577 register 0x200000400000 10485760 00:03:16.577 buf 0x200000600000 len 8388608 PASSED 00:03:16.577 free 0x200000600000 8388608 00:03:16.577 unregister 0x200000400000 10485760 PASSED 00:03:16.577 passed 00:03:16.577 00:03:16.577 Run Summary: Type Total Ran Passed Failed Inactive 00:03:16.577 suites 1 1 n/a 0 0 00:03:16.577 tests 1 1 1 0 0 00:03:16.577 asserts 15 15 15 0 n/a 00:03:16.577 00:03:16.577 Elapsed time = 0.005 seconds 00:03:16.577 00:03:16.577 real 0m0.054s 00:03:16.577 user 0m0.017s 00:03:16.577 sys 0m0.037s 00:03:16.577 09:14:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.577 09:14:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:16.577 ************************************ 00:03:16.577 END TEST env_mem_callbacks 00:03:16.577 ************************************ 00:03:16.577 00:03:16.577 real 0m6.207s 00:03:16.577 user 0m4.015s 00:03:16.577 sys 0m1.275s 00:03:16.577 09:14:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.577 09:14:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.577 ************************************ 00:03:16.577 END TEST env 00:03:16.577 ************************************ 00:03:16.835 09:14:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:16.835 09:14:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.835 09:14:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.835 09:14:28 -- common/autotest_common.sh@10 -- # set +x 00:03:16.835 ************************************ 00:03:16.835 START TEST rpc 00:03:16.835 ************************************ 00:03:16.835 09:14:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:16.835 * Looking for test storage... 
00:03:16.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.835 09:14:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.835 09:14:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.835 09:14:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.835 09:14:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.835 09:14:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.835 09:14:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:16.835 09:14:29 rpc -- scripts/common.sh@345 -- # : 1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.835 09:14:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.835 09:14:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@353 -- # local d=1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.835 09:14:29 rpc -- scripts/common.sh@355 -- # echo 1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.835 09:14:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@353 -- # local d=2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.835 09:14:29 rpc -- scripts/common.sh@355 -- # echo 2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.835 09:14:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.835 09:14:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.835 09:14:29 rpc -- scripts/common.sh@368 -- # return 0 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.835 --rc genhtml_branch_coverage=1 00:03:16.835 --rc genhtml_function_coverage=1 00:03:16.835 --rc genhtml_legend=1 00:03:16.835 --rc geninfo_all_blocks=1 00:03:16.835 --rc geninfo_unexecuted_blocks=1 00:03:16.835 00:03:16.835 ' 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.835 --rc genhtml_branch_coverage=1 00:03:16.835 --rc genhtml_function_coverage=1 00:03:16.835 --rc genhtml_legend=1 00:03:16.835 --rc geninfo_all_blocks=1 00:03:16.835 --rc geninfo_unexecuted_blocks=1 00:03:16.835 00:03:16.835 ' 00:03:16.835 09:14:29 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:16.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.835 --rc genhtml_branch_coverage=1 00:03:16.836 --rc genhtml_function_coverage=1 
00:03:16.836 --rc genhtml_legend=1 00:03:16.836 --rc geninfo_all_blocks=1 00:03:16.836 --rc geninfo_unexecuted_blocks=1 00:03:16.836 00:03:16.836 ' 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:16.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.836 --rc genhtml_branch_coverage=1 00:03:16.836 --rc genhtml_function_coverage=1 00:03:16.836 --rc genhtml_legend=1 00:03:16.836 --rc geninfo_all_blocks=1 00:03:16.836 --rc geninfo_unexecuted_blocks=1 00:03:16.836 00:03:16.836 ' 00:03:16.836 09:14:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3131138 00:03:16.836 09:14:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:16.836 09:14:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3131138 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 3131138 ']' 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:16.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:16.836 09:14:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.836 09:14:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:17.094 [2024-12-13 09:14:29.207596] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:17.094 [2024-12-13 09:14:29.207642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131138 ] 00:03:17.094 [2024-12-13 09:14:29.269542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:17.094 [2024-12-13 09:14:29.310438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:17.094 [2024-12-13 09:14:29.310477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3131138' to capture a snapshot of events at runtime. 00:03:17.094 [2024-12-13 09:14:29.310484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:17.094 [2024-12-13 09:14:29.310489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:17.094 [2024-12-13 09:14:29.310494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3131138 for offline analysis/debug. 
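The three app_setup_trace notices just above describe how to look at the tracepoints enabled by the '-e bdev' flag this target was started with: the trace buffer lives in /dev/shm/spdk_tgt_trace.pid3131138, and it can be snapshotted while the target is running or copied out for later analysis. A minimal sketch, assuming the default build layout and the pid from this run:

  # Snapshot the bdev tracepoints of the running spdk_tgt (pid taken from the notice above).
  ./build/bin/spdk_trace -s spdk_tgt -p 3131138
  # Keep a copy of the shared-memory trace file for offline analysis, as the notice suggests.
  cp /dev/shm/spdk_tgt_trace.pid3131138 /tmp/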
00:03:17.094 [2024-12-13 09:14:29.310967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:17.352 09:14:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:17.352 09:14:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:17.352 09:14:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:17.352 09:14:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:17.353 09:14:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:17.353 09:14:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:17.353 09:14:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.353 09:14:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.353 09:14:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 ************************************ 00:03:17.353 START TEST rpc_integrity 00:03:17.353 ************************************ 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:17.353 { 00:03:17.353 "name": "Malloc0", 00:03:17.353 "aliases": [ 00:03:17.353 "b492e4f7-cde4-457b-bded-4cec3fd65460" 00:03:17.353 ], 00:03:17.353 "product_name": "Malloc disk", 00:03:17.353 "block_size": 512, 00:03:17.353 "num_blocks": 16384, 00:03:17.353 "uuid": "b492e4f7-cde4-457b-bded-4cec3fd65460", 00:03:17.353 "assigned_rate_limits": { 00:03:17.353 "rw_ios_per_sec": 0, 00:03:17.353 "rw_mbytes_per_sec": 0, 00:03:17.353 "r_mbytes_per_sec": 0, 00:03:17.353 "w_mbytes_per_sec": 0 00:03:17.353 }, 
00:03:17.353 "claimed": false, 00:03:17.353 "zoned": false, 00:03:17.353 "supported_io_types": { 00:03:17.353 "read": true, 00:03:17.353 "write": true, 00:03:17.353 "unmap": true, 00:03:17.353 "flush": true, 00:03:17.353 "reset": true, 00:03:17.353 "nvme_admin": false, 00:03:17.353 "nvme_io": false, 00:03:17.353 "nvme_io_md": false, 00:03:17.353 "write_zeroes": true, 00:03:17.353 "zcopy": true, 00:03:17.353 "get_zone_info": false, 00:03:17.353 "zone_management": false, 00:03:17.353 "zone_append": false, 00:03:17.353 "compare": false, 00:03:17.353 "compare_and_write": false, 00:03:17.353 "abort": true, 00:03:17.353 "seek_hole": false, 00:03:17.353 "seek_data": false, 00:03:17.353 "copy": true, 00:03:17.353 "nvme_iov_md": false 00:03:17.353 }, 00:03:17.353 "memory_domains": [ 00:03:17.353 { 00:03:17.353 "dma_device_id": "system", 00:03:17.353 "dma_device_type": 1 00:03:17.353 }, 00:03:17.353 { 00:03:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.353 "dma_device_type": 2 00:03:17.353 } 00:03:17.353 ], 00:03:17.353 "driver_specific": {} 00:03:17.353 } 00:03:17.353 ]' 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 [2024-12-13 09:14:29.672221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:17.353 [2024-12-13 09:14:29.672248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:17.353 [2024-12-13 09:14:29.672260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10c6740 00:03:17.353 [2024-12-13 09:14:29.672266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:17.353 [2024-12-13 09:14:29.673337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:17.353 [2024-12-13 09:14:29.673358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:17.353 Passthru0 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.353 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:17.353 { 00:03:17.353 "name": "Malloc0", 00:03:17.353 "aliases": [ 00:03:17.353 "b492e4f7-cde4-457b-bded-4cec3fd65460" 00:03:17.353 ], 00:03:17.353 "product_name": "Malloc disk", 00:03:17.353 "block_size": 512, 00:03:17.353 "num_blocks": 16384, 00:03:17.353 "uuid": "b492e4f7-cde4-457b-bded-4cec3fd65460", 00:03:17.353 "assigned_rate_limits": { 00:03:17.353 "rw_ios_per_sec": 0, 00:03:17.353 "rw_mbytes_per_sec": 0, 00:03:17.353 "r_mbytes_per_sec": 0, 00:03:17.353 "w_mbytes_per_sec": 0 00:03:17.353 }, 00:03:17.353 "claimed": true, 00:03:17.353 "claim_type": "exclusive_write", 00:03:17.353 "zoned": false, 00:03:17.353 "supported_io_types": { 00:03:17.353 "read": true, 00:03:17.353 "write": true, 00:03:17.353 "unmap": true, 00:03:17.353 "flush": 
true, 00:03:17.353 "reset": true, 00:03:17.353 "nvme_admin": false, 00:03:17.353 "nvme_io": false, 00:03:17.353 "nvme_io_md": false, 00:03:17.353 "write_zeroes": true, 00:03:17.353 "zcopy": true, 00:03:17.353 "get_zone_info": false, 00:03:17.353 "zone_management": false, 00:03:17.353 "zone_append": false, 00:03:17.353 "compare": false, 00:03:17.353 "compare_and_write": false, 00:03:17.353 "abort": true, 00:03:17.353 "seek_hole": false, 00:03:17.353 "seek_data": false, 00:03:17.353 "copy": true, 00:03:17.353 "nvme_iov_md": false 00:03:17.353 }, 00:03:17.353 "memory_domains": [ 00:03:17.353 { 00:03:17.353 "dma_device_id": "system", 00:03:17.353 "dma_device_type": 1 00:03:17.353 }, 00:03:17.353 { 00:03:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.353 "dma_device_type": 2 00:03:17.353 } 00:03:17.353 ], 00:03:17.353 "driver_specific": {} 00:03:17.353 }, 00:03:17.353 { 00:03:17.353 "name": "Passthru0", 00:03:17.353 "aliases": [ 00:03:17.353 "58f0574c-cc7b-51d2-857d-f25448c7fd6b" 00:03:17.353 ], 00:03:17.353 "product_name": "passthru", 00:03:17.353 "block_size": 512, 00:03:17.353 "num_blocks": 16384, 00:03:17.353 "uuid": "58f0574c-cc7b-51d2-857d-f25448c7fd6b", 00:03:17.353 "assigned_rate_limits": { 00:03:17.353 "rw_ios_per_sec": 0, 00:03:17.353 "rw_mbytes_per_sec": 0, 00:03:17.353 "r_mbytes_per_sec": 0, 00:03:17.353 "w_mbytes_per_sec": 0 00:03:17.353 }, 00:03:17.353 "claimed": false, 00:03:17.353 "zoned": false, 00:03:17.353 "supported_io_types": { 00:03:17.353 "read": true, 00:03:17.353 "write": true, 00:03:17.353 "unmap": true, 00:03:17.353 "flush": true, 00:03:17.353 "reset": true, 00:03:17.353 "nvme_admin": false, 00:03:17.353 "nvme_io": false, 00:03:17.353 "nvme_io_md": false, 00:03:17.353 "write_zeroes": true, 00:03:17.353 "zcopy": true, 00:03:17.353 "get_zone_info": false, 00:03:17.353 "zone_management": false, 00:03:17.353 "zone_append": false, 00:03:17.353 "compare": false, 00:03:17.353 "compare_and_write": false, 00:03:17.353 "abort": true, 00:03:17.353 "seek_hole": false, 00:03:17.353 "seek_data": false, 00:03:17.353 "copy": true, 00:03:17.353 "nvme_iov_md": false 00:03:17.353 }, 00:03:17.353 "memory_domains": [ 00:03:17.353 { 00:03:17.353 "dma_device_id": "system", 00:03:17.353 "dma_device_type": 1 00:03:17.353 }, 00:03:17.353 { 00:03:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.353 "dma_device_type": 2 00:03:17.353 } 00:03:17.353 ], 00:03:17.353 "driver_specific": { 00:03:17.353 "passthru": { 00:03:17.353 "name": "Passthru0", 00:03:17.353 "base_bdev_name": "Malloc0" 00:03:17.353 } 00:03:17.353 } 00:03:17.353 } 00:03:17.353 ]' 00:03:17.353 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:17.613 09:14:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:17.613 00:03:17.613 real 0m0.269s 00:03:17.613 user 0m0.163s 00:03:17.613 sys 0m0.038s 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 ************************************ 00:03:17.613 END TEST rpc_integrity 00:03:17.613 ************************************ 00:03:17.613 09:14:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:17.613 09:14:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.613 09:14:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.613 09:14:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 ************************************ 00:03:17.613 START TEST rpc_plugins 00:03:17.613 ************************************ 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:17.613 { 00:03:17.613 "name": "Malloc1", 00:03:17.613 "aliases": [ 00:03:17.613 "b41db5b1-fcc2-4c9d-9940-5f1fcad48194" 00:03:17.613 ], 00:03:17.613 "product_name": "Malloc disk", 00:03:17.613 "block_size": 4096, 00:03:17.613 "num_blocks": 256, 00:03:17.613 "uuid": "b41db5b1-fcc2-4c9d-9940-5f1fcad48194", 00:03:17.613 "assigned_rate_limits": { 00:03:17.613 "rw_ios_per_sec": 0, 00:03:17.613 "rw_mbytes_per_sec": 0, 00:03:17.613 "r_mbytes_per_sec": 0, 00:03:17.613 "w_mbytes_per_sec": 0 00:03:17.613 }, 00:03:17.613 "claimed": false, 00:03:17.613 "zoned": false, 00:03:17.613 "supported_io_types": { 00:03:17.613 "read": true, 00:03:17.613 "write": true, 00:03:17.613 "unmap": true, 00:03:17.613 "flush": true, 00:03:17.613 "reset": true, 00:03:17.613 "nvme_admin": false, 00:03:17.613 "nvme_io": false, 00:03:17.613 "nvme_io_md": false, 00:03:17.613 "write_zeroes": true, 00:03:17.613 "zcopy": true, 00:03:17.613 "get_zone_info": false, 00:03:17.613 "zone_management": false, 00:03:17.613 "zone_append": false, 00:03:17.613 "compare": false, 00:03:17.613 "compare_and_write": false, 00:03:17.613 "abort": true, 00:03:17.613 "seek_hole": false, 00:03:17.613 "seek_data": false, 00:03:17.613 "copy": true, 00:03:17.613 "nvme_iov_md": false 
00:03:17.613 }, 00:03:17.613 "memory_domains": [ 00:03:17.613 { 00:03:17.613 "dma_device_id": "system", 00:03:17.613 "dma_device_type": 1 00:03:17.613 }, 00:03:17.613 { 00:03:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.613 "dma_device_type": 2 00:03:17.613 } 00:03:17.613 ], 00:03:17.613 "driver_specific": {} 00:03:17.613 } 00:03:17.613 ]' 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.613 09:14:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:17.613 09:14:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:17.872 09:14:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:17.872 00:03:17.872 real 0m0.133s 00:03:17.872 user 0m0.081s 00:03:17.872 sys 0m0.015s 00:03:17.872 09:14:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.872 09:14:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.872 ************************************ 00:03:17.872 END TEST rpc_plugins 00:03:17.872 ************************************ 00:03:17.872 09:14:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:17.872 09:14:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.872 09:14:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.872 09:14:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.872 ************************************ 00:03:17.872 START TEST rpc_trace_cmd_test 00:03:17.872 ************************************ 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:17.872 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3131138", 00:03:17.872 "tpoint_group_mask": "0x8", 00:03:17.872 "iscsi_conn": { 00:03:17.872 "mask": "0x2", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "scsi": { 00:03:17.872 "mask": "0x4", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "bdev": { 00:03:17.872 "mask": "0x8", 00:03:17.872 "tpoint_mask": "0xffffffffffffffff" 00:03:17.872 }, 00:03:17.872 "nvmf_rdma": { 00:03:17.872 "mask": "0x10", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "nvmf_tcp": { 00:03:17.872 "mask": "0x20", 00:03:17.872 
"tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "ftl": { 00:03:17.872 "mask": "0x40", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "blobfs": { 00:03:17.872 "mask": "0x80", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "dsa": { 00:03:17.872 "mask": "0x200", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "thread": { 00:03:17.872 "mask": "0x400", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "nvme_pcie": { 00:03:17.872 "mask": "0x800", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "iaa": { 00:03:17.872 "mask": "0x1000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "nvme_tcp": { 00:03:17.872 "mask": "0x2000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "bdev_nvme": { 00:03:17.872 "mask": "0x4000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "sock": { 00:03:17.872 "mask": "0x8000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "blob": { 00:03:17.872 "mask": "0x10000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "bdev_raid": { 00:03:17.872 "mask": "0x20000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 }, 00:03:17.872 "scheduler": { 00:03:17.872 "mask": "0x40000", 00:03:17.872 "tpoint_mask": "0x0" 00:03:17.872 } 00:03:17.872 }' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:17.872 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:18.130 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:18.130 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:18.130 09:14:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:18.130 00:03:18.130 real 0m0.229s 00:03:18.130 user 0m0.192s 00:03:18.130 sys 0m0.030s 00:03:18.130 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.130 09:14:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:18.130 ************************************ 00:03:18.130 END TEST rpc_trace_cmd_test 00:03:18.130 ************************************ 00:03:18.130 09:14:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:18.130 09:14:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:18.130 09:14:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:18.130 09:14:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.131 09:14:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.131 09:14:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.131 ************************************ 00:03:18.131 START TEST rpc_daemon_integrity 00:03:18.131 ************************************ 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.131 09:14:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:18.131 { 00:03:18.131 "name": "Malloc2", 00:03:18.131 "aliases": [ 00:03:18.131 "eb823001-38be-4f13-a22e-cd7f4f0bfc44" 00:03:18.131 ], 00:03:18.131 "product_name": "Malloc disk", 00:03:18.131 "block_size": 512, 00:03:18.131 "num_blocks": 16384, 00:03:18.131 "uuid": "eb823001-38be-4f13-a22e-cd7f4f0bfc44", 00:03:18.131 "assigned_rate_limits": { 00:03:18.131 "rw_ios_per_sec": 0, 00:03:18.131 "rw_mbytes_per_sec": 0, 00:03:18.131 "r_mbytes_per_sec": 0, 00:03:18.131 "w_mbytes_per_sec": 0 00:03:18.131 }, 00:03:18.131 "claimed": false, 00:03:18.131 "zoned": false, 00:03:18.131 "supported_io_types": { 00:03:18.131 "read": true, 00:03:18.131 "write": true, 00:03:18.131 "unmap": true, 00:03:18.131 "flush": true, 00:03:18.131 "reset": true, 00:03:18.131 "nvme_admin": false, 00:03:18.131 "nvme_io": false, 00:03:18.131 "nvme_io_md": false, 00:03:18.131 "write_zeroes": true, 00:03:18.131 "zcopy": true, 00:03:18.131 "get_zone_info": false, 00:03:18.131 "zone_management": false, 00:03:18.131 "zone_append": false, 00:03:18.131 "compare": false, 00:03:18.131 "compare_and_write": false, 00:03:18.131 "abort": true, 00:03:18.131 "seek_hole": false, 00:03:18.131 "seek_data": false, 00:03:18.131 "copy": true, 00:03:18.131 "nvme_iov_md": false 00:03:18.131 }, 00:03:18.131 "memory_domains": [ 00:03:18.131 { 00:03:18.131 "dma_device_id": "system", 00:03:18.131 "dma_device_type": 1 00:03:18.131 }, 00:03:18.131 { 00:03:18.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.131 "dma_device_type": 2 00:03:18.131 } 00:03:18.131 ], 00:03:18.131 "driver_specific": {} 00:03:18.131 } 00:03:18.131 ]' 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.131 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.131 [2024-12-13 09:14:30.494434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:18.131 
[2024-12-13 09:14:30.494465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:18.131 [2024-12-13 09:14:30.494477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1093fe0 00:03:18.131 [2024-12-13 09:14:30.494484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:18.131 [2024-12-13 09:14:30.495457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:18.131 [2024-12-13 09:14:30.495477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:18.389 Passthru0 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.389 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:18.389 { 00:03:18.389 "name": "Malloc2", 00:03:18.389 "aliases": [ 00:03:18.390 "eb823001-38be-4f13-a22e-cd7f4f0bfc44" 00:03:18.390 ], 00:03:18.390 "product_name": "Malloc disk", 00:03:18.390 "block_size": 512, 00:03:18.390 "num_blocks": 16384, 00:03:18.390 "uuid": "eb823001-38be-4f13-a22e-cd7f4f0bfc44", 00:03:18.390 "assigned_rate_limits": { 00:03:18.390 "rw_ios_per_sec": 0, 00:03:18.390 "rw_mbytes_per_sec": 0, 00:03:18.390 "r_mbytes_per_sec": 0, 00:03:18.390 "w_mbytes_per_sec": 0 00:03:18.390 }, 00:03:18.390 "claimed": true, 00:03:18.390 "claim_type": "exclusive_write", 00:03:18.390 "zoned": false, 00:03:18.390 "supported_io_types": { 00:03:18.390 "read": true, 00:03:18.390 "write": true, 00:03:18.390 "unmap": true, 00:03:18.390 "flush": true, 00:03:18.390 "reset": true, 00:03:18.390 "nvme_admin": false, 00:03:18.390 "nvme_io": false, 00:03:18.390 "nvme_io_md": false, 00:03:18.390 "write_zeroes": true, 00:03:18.390 "zcopy": true, 00:03:18.390 "get_zone_info": false, 00:03:18.390 "zone_management": false, 00:03:18.390 "zone_append": false, 00:03:18.390 "compare": false, 00:03:18.390 "compare_and_write": false, 00:03:18.390 "abort": true, 00:03:18.390 "seek_hole": false, 00:03:18.390 "seek_data": false, 00:03:18.390 "copy": true, 00:03:18.390 "nvme_iov_md": false 00:03:18.390 }, 00:03:18.390 "memory_domains": [ 00:03:18.390 { 00:03:18.390 "dma_device_id": "system", 00:03:18.390 "dma_device_type": 1 00:03:18.390 }, 00:03:18.390 { 00:03:18.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.390 "dma_device_type": 2 00:03:18.390 } 00:03:18.390 ], 00:03:18.390 "driver_specific": {} 00:03:18.390 }, 00:03:18.390 { 00:03:18.390 "name": "Passthru0", 00:03:18.390 "aliases": [ 00:03:18.390 "a523c663-f56a-5a78-a847-7ad1be2c75d1" 00:03:18.390 ], 00:03:18.390 "product_name": "passthru", 00:03:18.390 "block_size": 512, 00:03:18.390 "num_blocks": 16384, 00:03:18.390 "uuid": "a523c663-f56a-5a78-a847-7ad1be2c75d1", 00:03:18.390 "assigned_rate_limits": { 00:03:18.390 "rw_ios_per_sec": 0, 00:03:18.390 "rw_mbytes_per_sec": 0, 00:03:18.390 "r_mbytes_per_sec": 0, 00:03:18.390 "w_mbytes_per_sec": 0 00:03:18.390 }, 00:03:18.390 "claimed": false, 00:03:18.390 "zoned": false, 00:03:18.390 "supported_io_types": { 00:03:18.390 "read": true, 00:03:18.390 "write": true, 00:03:18.390 "unmap": true, 00:03:18.390 "flush": true, 00:03:18.390 "reset": true, 
00:03:18.390 "nvme_admin": false, 00:03:18.390 "nvme_io": false, 00:03:18.390 "nvme_io_md": false, 00:03:18.390 "write_zeroes": true, 00:03:18.390 "zcopy": true, 00:03:18.390 "get_zone_info": false, 00:03:18.390 "zone_management": false, 00:03:18.390 "zone_append": false, 00:03:18.390 "compare": false, 00:03:18.390 "compare_and_write": false, 00:03:18.390 "abort": true, 00:03:18.390 "seek_hole": false, 00:03:18.390 "seek_data": false, 00:03:18.390 "copy": true, 00:03:18.390 "nvme_iov_md": false 00:03:18.390 }, 00:03:18.390 "memory_domains": [ 00:03:18.390 { 00:03:18.390 "dma_device_id": "system", 00:03:18.390 "dma_device_type": 1 00:03:18.390 }, 00:03:18.390 { 00:03:18.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:18.390 "dma_device_type": 2 00:03:18.390 } 00:03:18.390 ], 00:03:18.390 "driver_specific": { 00:03:18.390 "passthru": { 00:03:18.390 "name": "Passthru0", 00:03:18.390 "base_bdev_name": "Malloc2" 00:03:18.390 } 00:03:18.390 } 00:03:18.390 } 00:03:18.390 ]' 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:18.390 00:03:18.390 real 0m0.259s 00:03:18.390 user 0m0.158s 00:03:18.390 sys 0m0.039s 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.390 09:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:18.390 ************************************ 00:03:18.390 END TEST rpc_daemon_integrity 00:03:18.390 ************************************ 00:03:18.390 09:14:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:18.390 09:14:30 rpc -- rpc/rpc.sh@84 -- # killprocess 3131138 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 3131138 ']' 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@958 -- # kill -0 3131138 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@959 -- # uname 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131138 
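Taken together, rpc_integrity and rpc_daemon_integrity above drive the same create/inspect/delete cycle against the running target: create a malloc bdev, layer a passthru bdev on top of it, check that bdev_get_bdevs reports both, then tear them down in reverse order (rpc_plugins and rpc_trace_cmd_test do the analogous checks for the plugin loader and the trace RPCs). Outside the test harness, roughly the same sequence can be issued through scripts/rpc.py; a hedged sketch using the names from this run:

  # Create/inspect/delete cycle equivalent to what rpc_integrity runs via rpc_cmd.
  ./scripts/rpc.py bdev_malloc_create 8 512              # 8 MiB malloc bdev, auto-named Malloc0 in this run
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2: Malloc0 and Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0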
00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131138' 00:03:18.390 killing process with pid 3131138 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@973 -- # kill 3131138 00:03:18.390 09:14:30 rpc -- common/autotest_common.sh@978 -- # wait 3131138 00:03:18.648 00:03:18.648 real 0m2.021s 00:03:18.649 user 0m2.564s 00:03:18.649 sys 0m0.661s 00:03:18.649 09:14:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.649 09:14:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.649 ************************************ 00:03:18.649 END TEST rpc 00:03:18.649 ************************************ 00:03:18.905 09:14:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:18.905 09:14:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.905 09:14:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.905 09:14:31 -- common/autotest_common.sh@10 -- # set +x 00:03:18.905 ************************************ 00:03:18.905 START TEST skip_rpc 00:03:18.905 ************************************ 00:03:18.905 09:14:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:18.905 * Looking for test storage... 00:03:18.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:18.905 09:14:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:18.905 09:14:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:18.905 09:14:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:18.905 09:14:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:18.905 09:14:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.906 09:14:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:18.906 09:14:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.906 09:14:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.906 09:14:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.906 09:14:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.906 --rc genhtml_branch_coverage=1 00:03:18.906 --rc genhtml_function_coverage=1 00:03:18.906 --rc genhtml_legend=1 00:03:18.906 --rc geninfo_all_blocks=1 00:03:18.906 --rc geninfo_unexecuted_blocks=1 00:03:18.906 00:03:18.906 ' 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.906 --rc genhtml_branch_coverage=1 00:03:18.906 --rc genhtml_function_coverage=1 00:03:18.906 --rc genhtml_legend=1 00:03:18.906 --rc geninfo_all_blocks=1 00:03:18.906 --rc geninfo_unexecuted_blocks=1 00:03:18.906 00:03:18.906 ' 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.906 --rc genhtml_branch_coverage=1 00:03:18.906 --rc genhtml_function_coverage=1 00:03:18.906 --rc genhtml_legend=1 00:03:18.906 --rc geninfo_all_blocks=1 00:03:18.906 --rc geninfo_unexecuted_blocks=1 00:03:18.906 00:03:18.906 ' 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:18.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.906 --rc genhtml_branch_coverage=1 00:03:18.906 --rc genhtml_function_coverage=1 00:03:18.906 --rc genhtml_legend=1 00:03:18.906 --rc geninfo_all_blocks=1 00:03:18.906 --rc geninfo_unexecuted_blocks=1 00:03:18.906 00:03:18.906 ' 00:03:18.906 09:14:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:18.906 09:14:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:18.906 09:14:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.906 09:14:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.164 ************************************ 00:03:19.164 START TEST skip_rpc 00:03:19.164 ************************************ 00:03:19.164 09:14:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:19.164 
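The skip_rpc case that starts here launches spdk_tgt with --no-rpc-server, so the usual /var/tmp/spdk.sock listener is never created, and then asserts through the NOT wrapper that rpc_cmd spdk_get_version fails against it. Condensed into plain shell (a sketch, not the harness itself), the check amounts to:

  # With --no-rpc-server the target must not answer any RPC.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                   # the test waits a fixed 5 seconds instead of polling the socket
  if ./scripts/rpc.py spdk_get_version; then echo "unexpected: RPC was served"; else echo "failed as expected"; fi
  kill $!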
09:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3131761 00:03:19.164 09:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:19.164 09:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:19.164 09:14:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:19.164 [2024-12-13 09:14:31.332159] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:19.164 [2024-12-13 09:14:31.332193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131761 ] 00:03:19.164 [2024-12-13 09:14:31.393789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:19.164 [2024-12-13 09:14:31.432828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3131761 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3131761 ']' 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3131761 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131761 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131761' 00:03:24.429 killing process with pid 3131761 00:03:24.429 09:14:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3131761 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3131761 00:03:24.429 00:03:24.429 real 0m5.361s 00:03:24.429 user 0m5.127s 00:03:24.429 sys 0m0.265s 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.429 09:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.429 ************************************ 00:03:24.429 END TEST skip_rpc 00:03:24.429 ************************************ 00:03:24.429 09:14:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:24.429 09:14:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.429 09:14:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.429 09:14:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.429 ************************************ 00:03:24.429 START TEST skip_rpc_with_json 00:03:24.429 ************************************ 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3132683 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3132683 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3132683 ']' 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:24.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:24.429 09:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.429 [2024-12-13 09:14:36.760071] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
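The skip_rpc pass that ends above reduces to a small pattern: start the target with its RPC server disabled and confirm that an RPC call fails instead of hanging. A rough sketch follows; paths are relative to the SPDK tree and the real harness wraps this in the NOT and killprocess helpers from autotest_common.sh.

    # Sketch only: spdk_get_version must fail when --no-rpc-server is given.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered although --no-rpc-server was given" >&2
        kill -9 "$spdk_pid"
        exit 1
    fi
    kill -9 "$spdk_pid"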
00:03:24.429 [2024-12-13 09:14:36.760109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132683 ] 00:03:24.687 [2024-12-13 09:14:36.822269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:24.687 [2024-12-13 09:14:36.864063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.945 [2024-12-13 09:14:37.075888] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:24.945 request: 00:03:24.945 { 00:03:24.945 "trtype": "tcp", 00:03:24.945 "method": "nvmf_get_transports", 00:03:24.945 "req_id": 1 00:03:24.945 } 00:03:24.945 Got JSON-RPC error response 00:03:24.945 response: 00:03:24.945 { 00:03:24.945 "code": -19, 00:03:24.945 "message": "No such device" 00:03:24.945 } 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.945 [2024-12-13 09:14:37.087992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.945 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:24.945 { 00:03:24.945 "subsystems": [ 00:03:24.945 { 00:03:24.945 "subsystem": "fsdev", 00:03:24.945 "config": [ 00:03:24.945 { 00:03:24.945 "method": "fsdev_set_opts", 00:03:24.945 "params": { 00:03:24.945 "fsdev_io_pool_size": 65535, 00:03:24.945 "fsdev_io_cache_size": 256 00:03:24.945 } 00:03:24.945 } 00:03:24.945 ] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "vfio_user_target", 00:03:24.945 "config": null 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "keyring", 00:03:24.945 "config": [] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "iobuf", 00:03:24.945 "config": [ 00:03:24.945 { 00:03:24.945 "method": "iobuf_set_options", 00:03:24.945 "params": { 00:03:24.945 "small_pool_count": 8192, 00:03:24.945 "large_pool_count": 1024, 00:03:24.945 "small_bufsize": 8192, 00:03:24.945 "large_bufsize": 135168, 00:03:24.945 "enable_numa": false 00:03:24.945 } 00:03:24.945 } 
00:03:24.945 ] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "sock", 00:03:24.945 "config": [ 00:03:24.945 { 00:03:24.945 "method": "sock_set_default_impl", 00:03:24.945 "params": { 00:03:24.945 "impl_name": "posix" 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "sock_impl_set_options", 00:03:24.945 "params": { 00:03:24.945 "impl_name": "ssl", 00:03:24.945 "recv_buf_size": 4096, 00:03:24.945 "send_buf_size": 4096, 00:03:24.945 "enable_recv_pipe": true, 00:03:24.945 "enable_quickack": false, 00:03:24.945 "enable_placement_id": 0, 00:03:24.945 "enable_zerocopy_send_server": true, 00:03:24.945 "enable_zerocopy_send_client": false, 00:03:24.945 "zerocopy_threshold": 0, 00:03:24.945 "tls_version": 0, 00:03:24.945 "enable_ktls": false 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "sock_impl_set_options", 00:03:24.945 "params": { 00:03:24.945 "impl_name": "posix", 00:03:24.945 "recv_buf_size": 2097152, 00:03:24.945 "send_buf_size": 2097152, 00:03:24.945 "enable_recv_pipe": true, 00:03:24.945 "enable_quickack": false, 00:03:24.945 "enable_placement_id": 0, 00:03:24.945 "enable_zerocopy_send_server": true, 00:03:24.945 "enable_zerocopy_send_client": false, 00:03:24.945 "zerocopy_threshold": 0, 00:03:24.945 "tls_version": 0, 00:03:24.945 "enable_ktls": false 00:03:24.945 } 00:03:24.945 } 00:03:24.945 ] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "vmd", 00:03:24.945 "config": [] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "accel", 00:03:24.945 "config": [ 00:03:24.945 { 00:03:24.945 "method": "accel_set_options", 00:03:24.945 "params": { 00:03:24.945 "small_cache_size": 128, 00:03:24.945 "large_cache_size": 16, 00:03:24.945 "task_count": 2048, 00:03:24.945 "sequence_count": 2048, 00:03:24.945 "buf_count": 2048 00:03:24.945 } 00:03:24.945 } 00:03:24.945 ] 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "subsystem": "bdev", 00:03:24.945 "config": [ 00:03:24.945 { 00:03:24.945 "method": "bdev_set_options", 00:03:24.945 "params": { 00:03:24.945 "bdev_io_pool_size": 65535, 00:03:24.945 "bdev_io_cache_size": 256, 00:03:24.945 "bdev_auto_examine": true, 00:03:24.945 "iobuf_small_cache_size": 128, 00:03:24.945 "iobuf_large_cache_size": 16 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "bdev_raid_set_options", 00:03:24.945 "params": { 00:03:24.945 "process_window_size_kb": 1024, 00:03:24.945 "process_max_bandwidth_mb_sec": 0 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "bdev_iscsi_set_options", 00:03:24.945 "params": { 00:03:24.945 "timeout_sec": 30 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "bdev_nvme_set_options", 00:03:24.945 "params": { 00:03:24.945 "action_on_timeout": "none", 00:03:24.945 "timeout_us": 0, 00:03:24.945 "timeout_admin_us": 0, 00:03:24.945 "keep_alive_timeout_ms": 10000, 00:03:24.945 "arbitration_burst": 0, 00:03:24.945 "low_priority_weight": 0, 00:03:24.945 "medium_priority_weight": 0, 00:03:24.945 "high_priority_weight": 0, 00:03:24.945 "nvme_adminq_poll_period_us": 10000, 00:03:24.945 "nvme_ioq_poll_period_us": 0, 00:03:24.945 "io_queue_requests": 0, 00:03:24.945 "delay_cmd_submit": true, 00:03:24.945 "transport_retry_count": 4, 00:03:24.945 "bdev_retry_count": 3, 00:03:24.945 "transport_ack_timeout": 0, 00:03:24.945 "ctrlr_loss_timeout_sec": 0, 00:03:24.945 "reconnect_delay_sec": 0, 00:03:24.945 "fast_io_fail_timeout_sec": 0, 00:03:24.945 "disable_auto_failback": false, 00:03:24.945 "generate_uuids": false, 00:03:24.945 "transport_tos": 
0, 00:03:24.945 "nvme_error_stat": false, 00:03:24.945 "rdma_srq_size": 0, 00:03:24.945 "io_path_stat": false, 00:03:24.945 "allow_accel_sequence": false, 00:03:24.945 "rdma_max_cq_size": 0, 00:03:24.945 "rdma_cm_event_timeout_ms": 0, 00:03:24.945 "dhchap_digests": [ 00:03:24.945 "sha256", 00:03:24.945 "sha384", 00:03:24.945 "sha512" 00:03:24.945 ], 00:03:24.945 "dhchap_dhgroups": [ 00:03:24.945 "null", 00:03:24.945 "ffdhe2048", 00:03:24.945 "ffdhe3072", 00:03:24.945 "ffdhe4096", 00:03:24.945 "ffdhe6144", 00:03:24.945 "ffdhe8192" 00:03:24.945 ] 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "bdev_nvme_set_hotplug", 00:03:24.945 "params": { 00:03:24.945 "period_us": 100000, 00:03:24.945 "enable": false 00:03:24.945 } 00:03:24.945 }, 00:03:24.945 { 00:03:24.945 "method": "bdev_wait_for_examine" 00:03:24.945 } 00:03:24.945 ] 00:03:24.945 }, 00:03:24.946 { 00:03:24.946 "subsystem": "scsi", 00:03:24.946 "config": null 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "scheduler", 00:03:24.946 "config": [ 00:03:24.946 { 00:03:24.946 "method": "framework_set_scheduler", 00:03:24.946 "params": { 00:03:24.946 "name": "static" 00:03:24.946 } 00:03:24.946 } 00:03:24.946 ] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "vhost_scsi", 00:03:24.946 "config": [] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "vhost_blk", 00:03:24.946 "config": [] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "ublk", 00:03:24.946 "config": [] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "nbd", 00:03:24.946 "config": [] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "nvmf", 00:03:24.946 "config": [ 00:03:24.946 { 00:03:24.946 "method": "nvmf_set_config", 00:03:24.946 "params": { 00:03:24.946 "discovery_filter": "match_any", 00:03:24.946 "admin_cmd_passthru": { 00:03:24.946 "identify_ctrlr": false 00:03:24.946 }, 00:03:24.946 "dhchap_digests": [ 00:03:24.946 "sha256", 00:03:24.946 "sha384", 00:03:24.946 "sha512" 00:03:24.946 ], 00:03:24.946 "dhchap_dhgroups": [ 00:03:24.946 "null", 00:03:24.946 "ffdhe2048", 00:03:24.946 "ffdhe3072", 00:03:24.946 "ffdhe4096", 00:03:24.946 "ffdhe6144", 00:03:24.946 "ffdhe8192" 00:03:24.946 ] 00:03:24.946 } 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "method": "nvmf_set_max_subsystems", 00:03:24.946 "params": { 00:03:24.946 "max_subsystems": 1024 00:03:24.946 } 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "method": "nvmf_set_crdt", 00:03:24.946 "params": { 00:03:24.946 "crdt1": 0, 00:03:24.946 "crdt2": 0, 00:03:24.946 "crdt3": 0 00:03:24.946 } 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "method": "nvmf_create_transport", 00:03:24.946 "params": { 00:03:24.946 "trtype": "TCP", 00:03:24.946 "max_queue_depth": 128, 00:03:24.946 "max_io_qpairs_per_ctrlr": 127, 00:03:24.946 "in_capsule_data_size": 4096, 00:03:24.946 "max_io_size": 131072, 00:03:24.946 "io_unit_size": 131072, 00:03:24.946 "max_aq_depth": 128, 00:03:24.946 "num_shared_buffers": 511, 00:03:24.946 "buf_cache_size": 4294967295, 00:03:24.946 "dif_insert_or_strip": false, 00:03:24.946 "zcopy": false, 00:03:24.946 "c2h_success": true, 00:03:24.946 "sock_priority": 0, 00:03:24.946 "abort_timeout_sec": 1, 00:03:24.946 "ack_timeout": 0, 00:03:24.946 "data_wr_pool_size": 0 00:03:24.946 } 00:03:24.946 } 00:03:24.946 ] 00:03:24.946 }, 00:03:24.946 { 00:03:24.946 "subsystem": "iscsi", 00:03:24.946 "config": [ 00:03:24.946 { 00:03:24.946 "method": "iscsi_set_options", 00:03:24.946 "params": { 00:03:24.946 "node_base": "iqn.2016-06.io.spdk", 00:03:24.946 "max_sessions": 
128, 00:03:24.946 "max_connections_per_session": 2, 00:03:24.946 "max_queue_depth": 64, 00:03:24.946 "default_time2wait": 2, 00:03:24.946 "default_time2retain": 20, 00:03:24.946 "first_burst_length": 8192, 00:03:24.946 "immediate_data": true, 00:03:24.946 "allow_duplicated_isid": false, 00:03:24.946 "error_recovery_level": 0, 00:03:24.946 "nop_timeout": 60, 00:03:24.946 "nop_in_interval": 30, 00:03:24.946 "disable_chap": false, 00:03:24.946 "require_chap": false, 00:03:24.946 "mutual_chap": false, 00:03:24.946 "chap_group": 0, 00:03:24.946 "max_large_datain_per_connection": 64, 00:03:24.946 "max_r2t_per_connection": 4, 00:03:24.946 "pdu_pool_size": 36864, 00:03:24.946 "immediate_data_pool_size": 16384, 00:03:24.946 "data_out_pool_size": 2048 00:03:24.946 } 00:03:24.946 } 00:03:24.946 ] 00:03:24.946 } 00:03:24.946 ] 00:03:24.946 } 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3132683 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3132683 ']' 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3132683 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:24.946 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132683 00:03:25.204 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:25.204 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:25.204 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132683' 00:03:25.204 killing process with pid 3132683 00:03:25.204 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3132683 00:03:25.204 09:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3132683 00:03:25.463 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3132814 00:03:25.463 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:25.463 09:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3132814 ']' 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3132814' 00:03:30.850 killing process with pid 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3132814 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.850 00:03:30.850 real 0m6.271s 00:03:30.850 user 0m5.995s 00:03:30.850 sys 0m0.575s 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.850 09:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.850 ************************************ 00:03:30.850 END TEST skip_rpc_with_json 00:03:30.850 ************************************ 00:03:30.850 09:14:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:30.850 09:14:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.850 09:14:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.850 09:14:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.850 ************************************ 00:03:30.850 START TEST skip_rpc_with_delay 00:03:30.850 ************************************ 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.850 
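The skip_rpc_with_json pass that finishes here is, at its core, a save-and-relaunch round trip. A condensed sketch, assuming a configured target is still running on the default socket; config.json and log.txt stand in for the files the test keeps under test/rpc/.

    # Dump the live configuration, restart the target from that file with the
    # RPC server disabled, and verify the TCP transport was re-created.
    ./scripts/rpc.py save_config > config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    spdk_pid=$!
    sleep 5
    grep -q 'TCP Transport Init' log.txt
    kill -9 "$spdk_pid"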
[2024-12-13 09:14:43.096902] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:30.850 00:03:30.850 real 0m0.064s 00:03:30.850 user 0m0.039s 00:03:30.850 sys 0m0.024s 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.850 09:14:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:30.850 ************************************ 00:03:30.850 END TEST skip_rpc_with_delay 00:03:30.851 ************************************ 00:03:30.851 09:14:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:30.851 09:14:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:30.851 09:14:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:30.851 09:14:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.851 09:14:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.851 09:14:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.851 ************************************ 00:03:30.851 START TEST exit_on_failed_rpc_init 00:03:30.851 ************************************ 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3133859 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3133859 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3133859 ']' 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:30.851 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.109 [2024-12-13 09:14:43.228288] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
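skip_rpc_with_delay ends as soon as the error above is printed; the whole check is essentially one invalid invocation that must fail fast. A minimal sketch:

    # --wait-for-rpc needs an RPC server, so combining it with
    # --no-rpc-server must be rejected at startup (non-zero exit).
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: invalid option combination was accepted" >&2
        exit 1
    fi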
00:03:31.109 [2024-12-13 09:14:43.228329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133859 ] 00:03:31.109 [2024-12-13 09:14:43.286250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.109 [2024-12-13 09:14:43.327949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:31.367 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.367 [2024-12-13 09:14:43.590604] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:31.367 [2024-12-13 09:14:43.590651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133875 ] 00:03:31.367 [2024-12-13 09:14:43.652025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.367 [2024-12-13 09:14:43.690918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:31.367 [2024-12-13 09:14:43.690987] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
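The "socket in use" error above is the expected outcome of exit_on_failed_rpc_init: both targets default to /var/tmp/spdk.sock, so the second one must bail out. A minimal reproduction sketch; the sleep is only a stand-in for the harness's waitforlisten helper.

    ./build/bin/spdk_tgt -m 0x1 &            # first target owns /var/tmp/spdk.sock
    first=$!
    sleep 1
    if ./build/bin/spdk_tgt -m 0x2; then     # second target must fail RPC init and exit non-zero
        echo "unexpected: second target started on the same RPC socket" >&2
    fi
    kill -9 "$first"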
00:03:31.367 [2024-12-13 09:14:43.690997] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:31.367 [2024-12-13 09:14:43.691003] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3133859 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3133859 ']' 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3133859 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133859 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133859' 00:03:31.626 killing process with pid 3133859 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3133859 00:03:31.626 09:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3133859 00:03:31.885 00:03:31.885 real 0m0.900s 00:03:31.885 user 0m0.964s 00:03:31.885 sys 0m0.360s 00:03:31.885 09:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.885 09:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.885 ************************************ 00:03:31.885 END TEST exit_on_failed_rpc_init 00:03:31.885 ************************************ 00:03:31.885 09:14:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.885 00:03:31.885 real 0m13.035s 00:03:31.885 user 0m12.348s 00:03:31.885 sys 0m1.464s 00:03:31.885 09:14:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.885 09:14:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.885 ************************************ 00:03:31.885 END TEST skip_rpc 00:03:31.885 ************************************ 00:03:31.885 09:14:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:31.885 09:14:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.885 09:14:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.885 09:14:44 -- 
common/autotest_common.sh@10 -- # set +x 00:03:31.885 ************************************ 00:03:31.885 START TEST rpc_client 00:03:31.885 ************************************ 00:03:31.885 09:14:44 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:32.144 * Looking for test storage... 00:03:32.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.144 09:14:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.144 --rc genhtml_branch_coverage=1 00:03:32.144 --rc genhtml_function_coverage=1 00:03:32.144 --rc genhtml_legend=1 00:03:32.144 --rc geninfo_all_blocks=1 00:03:32.144 --rc geninfo_unexecuted_blocks=1 00:03:32.144 00:03:32.144 ' 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.144 --rc genhtml_branch_coverage=1 00:03:32.144 --rc genhtml_function_coverage=1 00:03:32.144 --rc genhtml_legend=1 00:03:32.144 --rc geninfo_all_blocks=1 00:03:32.144 --rc geninfo_unexecuted_blocks=1 00:03:32.144 00:03:32.144 ' 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.144 --rc genhtml_branch_coverage=1 00:03:32.144 --rc genhtml_function_coverage=1 00:03:32.144 --rc genhtml_legend=1 00:03:32.144 --rc geninfo_all_blocks=1 00:03:32.144 --rc geninfo_unexecuted_blocks=1 00:03:32.144 00:03:32.144 ' 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:32.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.144 --rc genhtml_branch_coverage=1 00:03:32.144 --rc genhtml_function_coverage=1 00:03:32.144 --rc genhtml_legend=1 00:03:32.144 --rc geninfo_all_blocks=1 00:03:32.144 --rc geninfo_unexecuted_blocks=1 00:03:32.144 00:03:32.144 ' 00:03:32.144 09:14:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:32.144 OK 00:03:32.144 09:14:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:32.144 00:03:32.144 real 0m0.206s 00:03:32.144 user 0m0.124s 00:03:32.144 sys 0m0.095s 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.144 09:14:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:32.144 ************************************ 00:03:32.144 END TEST rpc_client 00:03:32.144 ************************************ 00:03:32.144 09:14:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:03:32.144 09:14:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.144 09:14:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.144 09:14:44 -- common/autotest_common.sh@10 -- # set +x 00:03:32.144 ************************************ 00:03:32.144 START TEST json_config 00:03:32.144 ************************************ 00:03:32.144 09:14:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:32.403 09:14:44 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:32.403 09:14:44 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:32.403 09:14:44 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:32.403 09:14:44 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:32.403 09:14:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.403 09:14:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.403 09:14:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.403 09:14:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.403 09:14:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.403 09:14:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.403 09:14:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.403 09:14:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.403 09:14:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.404 09:14:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.404 09:14:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:32.404 09:14:44 json_config -- scripts/common.sh@345 -- # : 1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.404 09:14:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.404 09:14:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@353 -- # local d=1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.404 09:14:44 json_config -- scripts/common.sh@355 -- # echo 1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.404 09:14:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:32.404 09:14:44 json_config -- scripts/common.sh@353 -- # local d=2 00:03:32.404 09:14:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.404 09:14:44 json_config -- scripts/common.sh@355 -- # echo 2 00:03:32.404 09:14:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.404 09:14:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.404 09:14:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.404 09:14:44 json_config -- scripts/common.sh@368 -- # return 0 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:32.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.404 --rc genhtml_branch_coverage=1 00:03:32.404 --rc genhtml_function_coverage=1 00:03:32.404 --rc genhtml_legend=1 00:03:32.404 --rc geninfo_all_blocks=1 00:03:32.404 --rc geninfo_unexecuted_blocks=1 00:03:32.404 00:03:32.404 ' 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:32.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.404 --rc genhtml_branch_coverage=1 00:03:32.404 --rc genhtml_function_coverage=1 00:03:32.404 --rc genhtml_legend=1 00:03:32.404 --rc geninfo_all_blocks=1 00:03:32.404 --rc geninfo_unexecuted_blocks=1 00:03:32.404 00:03:32.404 ' 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:32.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.404 --rc genhtml_branch_coverage=1 00:03:32.404 --rc genhtml_function_coverage=1 00:03:32.404 --rc genhtml_legend=1 00:03:32.404 --rc geninfo_all_blocks=1 00:03:32.404 --rc geninfo_unexecuted_blocks=1 00:03:32.404 00:03:32.404 ' 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:32.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.404 --rc genhtml_branch_coverage=1 00:03:32.404 --rc genhtml_function_coverage=1 00:03:32.404 --rc genhtml_legend=1 00:03:32.404 --rc geninfo_all_blocks=1 00:03:32.404 --rc geninfo_unexecuted_blocks=1 00:03:32.404 00:03:32.404 ' 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
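The lcov probe traced at the start of both rpc_client and json_config (the lt/cmp_versions helpers from scripts/common.sh) is a field-by-field compare of dotted version strings, which is why the legacy lcov_branch_coverage option names show up in LCOV_OPTS above. A trimmed-down sketch of the same idea, not the exact helper:

    # Succeeds when $1 < $2, comparing up to three dotted fields;
    # missing fields count as 0.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for i in 0 1 2; do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc option names"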
00:03:32.404 09:14:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:32.404 09:14:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:32.404 09:14:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.404 09:14:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.404 09:14:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.404 09:14:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.404 09:14:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.404 09:14:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.404 09:14:44 json_config -- paths/export.sh@5 -- # export PATH 00:03:32.404 09:14:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@51 -- # : 0 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
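Sourcing nvmf/common.sh above derives the host identity from nvme-cli; a standalone equivalent looks roughly like this. The UUID differs per host, and the parameter-expansion strip is only an illustrative way to obtain the bare host ID.

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, used as the host ID
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"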
00:03:32.404 09:14:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:32.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:32.404 09:14:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:32.404 INFO: JSON configuration test init 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.404 09:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.404 09:14:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:32.404 09:14:44 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:32.404 09:14:44 json_config -- json_config/common.sh@10 -- # shift 00:03:32.404 09:14:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:32.404 09:14:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:32.404 09:14:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:32.404 09:14:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:32.404 09:14:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:32.404 09:14:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3134222 00:03:32.404 09:14:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:32.404 Waiting for target to run... 00:03:32.405 09:14:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:32.405 09:14:44 json_config -- json_config/common.sh@25 -- # waitforlisten 3134222 /var/tmp/spdk_tgt.sock 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 3134222 ']' 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:32.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.405 09:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.405 [2024-12-13 09:14:44.694834] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
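The json_config target launched above listens on a non-default RPC socket and holds off subsystem init because of --wait-for-rpc. Driving it by hand would look roughly like the sketch below; the actual test goes through the tgt_rpc and load_config helpers instead, and rpc_get_methods is just a convenient call that is permitted before init.

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    sleep 1     # stand-in for the harness's waitforlisten
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods        # -s selects the non-default socket
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # finish startup once configuration is loaded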
00:03:32.405 [2024-12-13 09:14:44.694884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134222 ] 00:03:32.971 [2024-12-13 09:14:45.141309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.971 [2024-12-13 09:14:45.193121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:33.228 09:14:45 json_config -- json_config/common.sh@26 -- # echo '' 00:03:33.228 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.228 09:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:33.228 09:14:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:33.228 09:14:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:36.512 09:14:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.512 09:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:36.512 09:14:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:36.512 09:14:48 json_config -- 
json_config/json_config.sh@54 -- # uniq -u 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@54 -- # sort 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:36.512 09:14:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:36.512 09:14:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:36.512 09:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:36.769 09:14:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.769 09:14:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:36.769 09:14:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:36.769 09:14:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:36.769 MallocForNvmf0 00:03:36.769 09:14:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:36.769 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:37.027 MallocForNvmf1 00:03:37.027 09:14:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:37.027 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:37.285 [2024-12-13 09:14:49.424944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:37.285 09:14:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:37.285 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:37.285 09:14:49 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:37.285 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:37.543 09:14:49 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:37.543 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:37.800 09:14:49 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:37.800 09:14:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:37.800 [2024-12-13 09:14:50.151215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:37.800 09:14:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:37.800 09:14:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:37.800 09:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.058 09:14:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:38.058 09:14:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.058 09:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.058 09:14:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:38.058 09:14:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:38.058 09:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:38.058 MallocBdevForConfigChangeCheck 00:03:38.058 09:14:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:38.058 09:14:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.058 09:14:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.315 09:14:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:38.315 09:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:38.572 09:14:50 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:38.572 INFO: shutting down applications... 
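The trace above is json_config.sh's create_nvmf_subsystem_config step: two malloc bdevs, a TCP transport, one NVMe-oF subsystem, two namespaces and a listener, all driven through rpc.py against the target's /var/tmp/spdk_tgt.sock socket, followed by save_config. A minimal stand-alone sketch of the same sequence, using only the values visible in this run and assuming spdk_tgt is already listening on that socket:

# sketch only -- values taken from the trace above, spdk_tgt assumed already running
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport with the test's options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > spdk_tgt_config.json                 # snapshot the running config as JSON

The saved JSON is what the MallocBdevForConfigChangeCheck and relaunch steps that follow compare against.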
00:03:38.572 09:14:50 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:38.572 09:14:50 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:38.572 09:14:50 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:38.572 09:14:50 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:40.469 Calling clear_iscsi_subsystem 00:03:40.469 Calling clear_nvmf_subsystem 00:03:40.469 Calling clear_nbd_subsystem 00:03:40.469 Calling clear_ublk_subsystem 00:03:40.469 Calling clear_vhost_blk_subsystem 00:03:40.469 Calling clear_vhost_scsi_subsystem 00:03:40.469 Calling clear_bdev_subsystem 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@352 -- # break 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:40.469 09:14:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:40.469 09:14:52 json_config -- json_config/common.sh@31 -- # local app=target 00:03:40.469 09:14:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:40.469 09:14:52 json_config -- json_config/common.sh@35 -- # [[ -n 3134222 ]] 00:03:40.469 09:14:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3134222 00:03:40.469 09:14:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:40.469 09:14:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:40.469 09:14:52 json_config -- json_config/common.sh@41 -- # kill -0 3134222 00:03:40.469 09:14:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:41.035 09:14:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:41.035 09:14:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:41.035 09:14:53 json_config -- json_config/common.sh@41 -- # kill -0 3134222 00:03:41.035 09:14:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:41.035 09:14:53 json_config -- json_config/common.sh@43 -- # break 00:03:41.035 09:14:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:41.035 09:14:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:41.035 SPDK target shutdown done 00:03:41.035 09:14:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:41.035 INFO: relaunching applications... 
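The shutdown traced above first empties the target with test/json_config/clear_config.py (the "Calling clear_*_subsystem" lines), then sends SIGINT to the target pid and polls it with kill -0 for up to 30 half-second intervals. A simplified sketch of that wait loop, with the pid from this run:

# shape of json_config_test_shutdown_app in test/json_config/common.sh (simplified sketch)
kill -SIGINT 3134222                        # ask spdk_tgt to shut down cleanly
for (( i = 0; i < 30; i++ )); do
    kill -0 3134222 2>/dev/null || break    # kill -0 only tests that the process still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'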
00:03:41.035 09:14:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:41.035 09:14:53 json_config -- json_config/common.sh@9 -- # local app=target 00:03:41.035 09:14:53 json_config -- json_config/common.sh@10 -- # shift 00:03:41.035 09:14:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:41.035 09:14:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:41.035 09:14:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:41.035 09:14:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:41.035 09:14:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:41.035 09:14:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3135697 00:03:41.035 09:14:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:41.035 Waiting for target to run... 00:03:41.035 09:14:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:41.035 09:14:53 json_config -- json_config/common.sh@25 -- # waitforlisten 3135697 /var/tmp/spdk_tgt.sock 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 3135697 ']' 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:41.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.035 09:14:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:41.035 [2024-12-13 09:14:53.257610] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:41.035 [2024-12-13 09:14:53.257664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135697 ] 00:03:41.601 [2024-12-13 09:14:53.703585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.601 [2024-12-13 09:14:53.755072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.882 [2024-12-13 09:14:56.787245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:44.882 [2024-12-13 09:14:56.819531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.139 09:14:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.139 09:14:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:45.139 09:14:57 json_config -- json_config/common.sh@26 -- # echo '' 00:03:45.139 00:03:45.139 09:14:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:45.139 09:14:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:45.139 INFO: Checking if target configuration is the same... 
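The relaunch above restarts spdk_tgt directly from the JSON it saved earlier, so the check that follows can ask the new process for its configuration and compare. A sketch of the relaunch command with the flags used in this run (-m core mask, -s memory size in MB, -r RPC socket, --json startup config):

# relaunch from the saved configuration (values taken from the trace above)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &

The json_diff.sh run that follows normalizes both sides with config_filter.py -method sort before running diff -u, so key order in the JSON does not register as a difference.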
00:03:45.139 09:14:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.139 09:14:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:45.139 09:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:45.139 + '[' 2 -ne 2 ']' 00:03:45.139 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:45.139 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:45.139 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.139 +++ basename /dev/fd/62 00:03:45.139 ++ mktemp /tmp/62.XXX 00:03:45.139 + tmp_file_1=/tmp/62.DIx 00:03:45.139 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.139 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:45.139 + tmp_file_2=/tmp/spdk_tgt_config.json.17Z 00:03:45.139 + ret=0 00:03:45.139 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:45.706 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:45.706 + diff -u /tmp/62.DIx /tmp/spdk_tgt_config.json.17Z 00:03:45.706 + echo 'INFO: JSON config files are the same' 00:03:45.706 INFO: JSON config files are the same 00:03:45.706 + rm /tmp/62.DIx /tmp/spdk_tgt_config.json.17Z 00:03:45.706 + exit 0 00:03:45.706 09:14:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:45.706 09:14:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:45.706 INFO: changing configuration and checking if this can be detected... 00:03:45.706 09:14:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:45.706 09:14:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:45.706 09:14:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.706 09:14:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:45.706 09:14:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:45.706 + '[' 2 -ne 2 ']' 00:03:45.706 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:45.706 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:45.706 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.706 +++ basename /dev/fd/62 00:03:45.706 ++ mktemp /tmp/62.XXX 00:03:45.706 + tmp_file_1=/tmp/62.1lS 00:03:45.706 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.706 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:45.706 + tmp_file_2=/tmp/spdk_tgt_config.json.msL 00:03:45.706 + ret=0 00:03:45.706 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.273 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.273 + diff -u /tmp/62.1lS /tmp/spdk_tgt_config.json.msL 00:03:46.273 + ret=1 00:03:46.273 + echo '=== Start of file: /tmp/62.1lS ===' 00:03:46.273 + cat /tmp/62.1lS 00:03:46.273 + echo '=== End of file: /tmp/62.1lS ===' 00:03:46.273 + echo '' 00:03:46.273 + echo '=== Start of file: /tmp/spdk_tgt_config.json.msL ===' 00:03:46.273 + cat /tmp/spdk_tgt_config.json.msL 00:03:46.273 + echo '=== End of file: /tmp/spdk_tgt_config.json.msL ===' 00:03:46.273 + echo '' 00:03:46.273 + rm /tmp/62.1lS /tmp/spdk_tgt_config.json.msL 00:03:46.273 + exit 1 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:46.273 INFO: configuration change detected. 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 3135697 ]] 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.273 09:14:58 json_config -- json_config/json_config.sh@330 -- # killprocess 3135697 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 3135697 ']' 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@958 -- # kill -0 3135697 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@959 -- # uname 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.273 09:14:58 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135697 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135697' 00:03:46.273 killing process with pid 3135697 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@973 -- # kill 3135697 00:03:46.273 09:14:58 json_config -- common/autotest_common.sh@978 -- # wait 3135697 00:03:47.649 09:14:59 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.649 09:14:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:47.649 09:14:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.649 09:14:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.649 09:15:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:47.649 09:15:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:47.649 INFO: Success 00:03:47.908 00:03:47.908 real 0m15.568s 00:03:47.908 user 0m15.770s 00:03:47.908 sys 0m2.751s 00:03:47.908 09:15:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.908 09:15:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.908 ************************************ 00:03:47.908 END TEST json_config 00:03:47.908 ************************************ 00:03:47.908 09:15:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:47.908 09:15:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.908 09:15:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.908 09:15:00 -- common/autotest_common.sh@10 -- # set +x 00:03:47.908 ************************************ 00:03:47.908 START TEST json_config_extra_key 00:03:47.908 ************************************ 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:47.908 09:15:00 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.908 --rc genhtml_branch_coverage=1 00:03:47.908 --rc genhtml_function_coverage=1 00:03:47.908 --rc genhtml_legend=1 00:03:47.908 --rc geninfo_all_blocks=1 00:03:47.908 --rc geninfo_unexecuted_blocks=1 00:03:47.908 00:03:47.908 ' 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.908 --rc genhtml_branch_coverage=1 00:03:47.908 --rc genhtml_function_coverage=1 00:03:47.908 --rc genhtml_legend=1 00:03:47.908 --rc geninfo_all_blocks=1 00:03:47.908 --rc geninfo_unexecuted_blocks=1 00:03:47.908 00:03:47.908 ' 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.908 --rc genhtml_branch_coverage=1 00:03:47.908 --rc genhtml_function_coverage=1 00:03:47.908 --rc genhtml_legend=1 00:03:47.908 --rc geninfo_all_blocks=1 00:03:47.908 --rc geninfo_unexecuted_blocks=1 00:03:47.908 00:03:47.908 ' 00:03:47.908 09:15:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.908 --rc genhtml_branch_coverage=1 00:03:47.908 --rc genhtml_function_coverage=1 00:03:47.908 --rc genhtml_legend=1 00:03:47.908 --rc geninfo_all_blocks=1 00:03:47.908 --rc geninfo_unexecuted_blocks=1 00:03:47.908 00:03:47.908 ' 00:03:47.908 09:15:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.908 09:15:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.908 09:15:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.909 09:15:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.909 09:15:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.909 09:15:00 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.909 09:15:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:47.909 09:15:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:47.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:47.909 09:15:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:47.909 INFO: launching applications... 
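The declarations traced above are the bookkeeping that test/json_config/common.sh keeps per application: one associative array each for the pid, RPC socket, extra spdk_tgt parameters and the JSON config to load. A sketch of what this run just set up for the 'target' app:

# per-app bookkeeping from test/json_config/common.sh (values from the trace above)
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
# json_config_test_start_app target then launches, roughly:
#   spdk_tgt ${app_params[target]} -r ${app_socket[target]} --json ${configs_path[target]}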
00:03:47.909 09:15:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3136974 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.909 Waiting for target to run... 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3136974 /var/tmp/spdk_tgt.sock 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3136974 ']' 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.909 09:15:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.909 09:15:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:48.168 [2024-12-13 09:15:00.320161] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:48.168 [2024-12-13 09:15:00.320212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136974 ] 00:03:48.427 [2024-12-13 09:15:00.768433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.685 [2024-12-13 09:15:00.823537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.944 09:15:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.944 09:15:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:48.944 00:03:48.944 09:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:48.944 INFO: shutting down applications... 
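The start-up just traced relies on waitforlisten (from autotest_common.sh), which blocks until the new pid exists and its RPC socket answers, retrying up to max_retries=100 before the test moves on to the shutdown that follows. A minimal stand-in for that readiness check, assuming the socket path from this run (the real helper also verifies the pid):

# rough equivalent of waitforlisten: poll the RPC socket until the target responds
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
for (( i = 0; i < 100; i++ )); do
    $RPC spdk_get_version >/dev/null 2>&1 && break   # any cheap RPC that answers once the server is up
    sleep 0.5
done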
00:03:48.944 09:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3136974 ]] 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3136974 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3136974 00:03:48.944 09:15:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3136974 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:49.510 09:15:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:49.511 09:15:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:49.511 SPDK target shutdown done 00:03:49.511 09:15:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:49.511 Success 00:03:49.511 00:03:49.511 real 0m1.572s 00:03:49.511 user 0m1.202s 00:03:49.511 sys 0m0.554s 00:03:49.511 09:15:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.511 09:15:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:49.511 ************************************ 00:03:49.511 END TEST json_config_extra_key 00:03:49.511 ************************************ 00:03:49.511 09:15:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:49.511 09:15:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.511 09:15:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.511 09:15:01 -- common/autotest_common.sh@10 -- # set +x 00:03:49.511 ************************************ 00:03:49.511 START TEST alias_rpc 00:03:49.511 ************************************ 00:03:49.511 09:15:01 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:49.511 * Looking for test storage... 
00:03:49.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:49.511 09:15:01 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.511 09:15:01 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.511 09:15:01 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.769 09:15:01 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.769 09:15:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.769 09:15:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.769 09:15:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.769 09:15:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.770 09:15:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.770 --rc genhtml_branch_coverage=1 00:03:49.770 --rc genhtml_function_coverage=1 00:03:49.770 --rc genhtml_legend=1 00:03:49.770 --rc geninfo_all_blocks=1 00:03:49.770 --rc geninfo_unexecuted_blocks=1 00:03:49.770 00:03:49.770 ' 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.770 --rc genhtml_branch_coverage=1 00:03:49.770 --rc genhtml_function_coverage=1 00:03:49.770 --rc genhtml_legend=1 00:03:49.770 --rc geninfo_all_blocks=1 00:03:49.770 --rc geninfo_unexecuted_blocks=1 00:03:49.770 00:03:49.770 ' 00:03:49.770 09:15:01 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.770 --rc genhtml_branch_coverage=1 00:03:49.770 --rc genhtml_function_coverage=1 00:03:49.770 --rc genhtml_legend=1 00:03:49.770 --rc geninfo_all_blocks=1 00:03:49.770 --rc geninfo_unexecuted_blocks=1 00:03:49.770 00:03:49.770 ' 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.770 --rc genhtml_branch_coverage=1 00:03:49.770 --rc genhtml_function_coverage=1 00:03:49.770 --rc genhtml_legend=1 00:03:49.770 --rc geninfo_all_blocks=1 00:03:49.770 --rc geninfo_unexecuted_blocks=1 00:03:49.770 00:03:49.770 ' 00:03:49.770 09:15:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:49.770 09:15:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3137546 00:03:49.770 09:15:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3137546 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3137546 ']' 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.770 09:15:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.770 09:15:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.770 [2024-12-13 09:15:01.951142] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:03:49.770 [2024-12-13 09:15:01.951191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137546 ] 00:03:49.770 [2024-12-13 09:15:02.014476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.770 [2024-12-13 09:15:02.056211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.029 09:15:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.029 09:15:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:50.029 09:15:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:50.286 09:15:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3137546 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3137546 ']' 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3137546 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137546 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3137546' 00:03:50.286 killing process with pid 3137546 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 3137546 00:03:50.286 09:15:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 3137546 00:03:50.545 00:03:50.545 real 0m1.126s 00:03:50.545 user 0m1.187s 00:03:50.545 sys 0m0.381s 00:03:50.545 09:15:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.545 09:15:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.545 ************************************ 00:03:50.545 END TEST alias_rpc 00:03:50.545 ************************************ 00:03:50.545 09:15:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:50.545 09:15:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:50.545 09:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.545 09:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.545 09:15:02 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 START TEST spdkcli_tcp 00:03:50.804 ************************************ 00:03:50.804 09:15:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:50.804 * Looking for test storage... 
00:03:50.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.804 09:15:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.804 --rc genhtml_branch_coverage=1 00:03:50.804 --rc genhtml_function_coverage=1 00:03:50.804 --rc genhtml_legend=1 00:03:50.804 --rc geninfo_all_blocks=1 00:03:50.804 --rc geninfo_unexecuted_blocks=1 00:03:50.804 00:03:50.804 ' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.804 --rc genhtml_branch_coverage=1 00:03:50.804 --rc genhtml_function_coverage=1 00:03:50.804 --rc genhtml_legend=1 00:03:50.804 --rc geninfo_all_blocks=1 00:03:50.804 --rc 
geninfo_unexecuted_blocks=1 00:03:50.804 00:03:50.804 ' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.804 --rc genhtml_branch_coverage=1 00:03:50.804 --rc genhtml_function_coverage=1 00:03:50.804 --rc genhtml_legend=1 00:03:50.804 --rc geninfo_all_blocks=1 00:03:50.804 --rc geninfo_unexecuted_blocks=1 00:03:50.804 00:03:50.804 ' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.804 --rc genhtml_branch_coverage=1 00:03:50.804 --rc genhtml_function_coverage=1 00:03:50.804 --rc genhtml_legend=1 00:03:50.804 --rc geninfo_all_blocks=1 00:03:50.804 --rc geninfo_unexecuted_blocks=1 00:03:50.804 00:03:50.804 ' 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3137768 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3137768 00:03:50.804 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3137768 ']' 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.804 09:15:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 [2024-12-13 09:15:03.154285] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:03:50.804 [2024-12-13 09:15:03.154335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137768 ] 00:03:51.063 [2024-12-13 09:15:03.218815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:51.063 [2024-12-13 09:15:03.261688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:51.063 [2024-12-13 09:15:03.261691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.322 09:15:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.322 09:15:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:51.322 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3137854 00:03:51.322 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:51.322 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:51.322 [ 00:03:51.322 "bdev_malloc_delete", 00:03:51.322 "bdev_malloc_create", 00:03:51.322 "bdev_null_resize", 00:03:51.322 "bdev_null_delete", 00:03:51.322 "bdev_null_create", 00:03:51.322 "bdev_nvme_cuse_unregister", 00:03:51.322 "bdev_nvme_cuse_register", 00:03:51.322 "bdev_opal_new_user", 00:03:51.322 "bdev_opal_set_lock_state", 00:03:51.322 "bdev_opal_delete", 00:03:51.322 "bdev_opal_get_info", 00:03:51.322 "bdev_opal_create", 00:03:51.322 "bdev_nvme_opal_revert", 00:03:51.322 "bdev_nvme_opal_init", 00:03:51.322 "bdev_nvme_send_cmd", 00:03:51.322 "bdev_nvme_set_keys", 00:03:51.322 "bdev_nvme_get_path_iostat", 00:03:51.322 "bdev_nvme_get_mdns_discovery_info", 00:03:51.322 "bdev_nvme_stop_mdns_discovery", 00:03:51.322 "bdev_nvme_start_mdns_discovery", 00:03:51.322 "bdev_nvme_set_multipath_policy", 00:03:51.322 "bdev_nvme_set_preferred_path", 00:03:51.322 "bdev_nvme_get_io_paths", 00:03:51.322 "bdev_nvme_remove_error_injection", 00:03:51.323 "bdev_nvme_add_error_injection", 00:03:51.323 "bdev_nvme_get_discovery_info", 00:03:51.323 "bdev_nvme_stop_discovery", 00:03:51.323 "bdev_nvme_start_discovery", 00:03:51.323 "bdev_nvme_get_controller_health_info", 00:03:51.323 "bdev_nvme_disable_controller", 00:03:51.323 "bdev_nvme_enable_controller", 00:03:51.323 "bdev_nvme_reset_controller", 00:03:51.323 "bdev_nvme_get_transport_statistics", 00:03:51.323 "bdev_nvme_apply_firmware", 00:03:51.323 "bdev_nvme_detach_controller", 00:03:51.323 "bdev_nvme_get_controllers", 00:03:51.323 "bdev_nvme_attach_controller", 00:03:51.323 "bdev_nvme_set_hotplug", 00:03:51.323 "bdev_nvme_set_options", 00:03:51.323 "bdev_passthru_delete", 00:03:51.323 "bdev_passthru_create", 00:03:51.323 "bdev_lvol_set_parent_bdev", 00:03:51.323 "bdev_lvol_set_parent", 00:03:51.323 "bdev_lvol_check_shallow_copy", 00:03:51.323 "bdev_lvol_start_shallow_copy", 00:03:51.323 "bdev_lvol_grow_lvstore", 00:03:51.323 "bdev_lvol_get_lvols", 00:03:51.323 "bdev_lvol_get_lvstores", 00:03:51.323 "bdev_lvol_delete", 00:03:51.323 "bdev_lvol_set_read_only", 00:03:51.323 "bdev_lvol_resize", 00:03:51.323 "bdev_lvol_decouple_parent", 00:03:51.323 "bdev_lvol_inflate", 00:03:51.323 "bdev_lvol_rename", 00:03:51.323 "bdev_lvol_clone_bdev", 00:03:51.323 "bdev_lvol_clone", 00:03:51.323 "bdev_lvol_snapshot", 00:03:51.323 "bdev_lvol_create", 00:03:51.323 "bdev_lvol_delete_lvstore", 00:03:51.323 "bdev_lvol_rename_lvstore", 
00:03:51.323 "bdev_lvol_create_lvstore", 00:03:51.323 "bdev_raid_set_options", 00:03:51.323 "bdev_raid_remove_base_bdev", 00:03:51.323 "bdev_raid_add_base_bdev", 00:03:51.323 "bdev_raid_delete", 00:03:51.323 "bdev_raid_create", 00:03:51.323 "bdev_raid_get_bdevs", 00:03:51.323 "bdev_error_inject_error", 00:03:51.323 "bdev_error_delete", 00:03:51.323 "bdev_error_create", 00:03:51.323 "bdev_split_delete", 00:03:51.323 "bdev_split_create", 00:03:51.323 "bdev_delay_delete", 00:03:51.323 "bdev_delay_create", 00:03:51.323 "bdev_delay_update_latency", 00:03:51.323 "bdev_zone_block_delete", 00:03:51.323 "bdev_zone_block_create", 00:03:51.323 "blobfs_create", 00:03:51.323 "blobfs_detect", 00:03:51.323 "blobfs_set_cache_size", 00:03:51.323 "bdev_aio_delete", 00:03:51.323 "bdev_aio_rescan", 00:03:51.323 "bdev_aio_create", 00:03:51.323 "bdev_ftl_set_property", 00:03:51.323 "bdev_ftl_get_properties", 00:03:51.323 "bdev_ftl_get_stats", 00:03:51.323 "bdev_ftl_unmap", 00:03:51.323 "bdev_ftl_unload", 00:03:51.323 "bdev_ftl_delete", 00:03:51.323 "bdev_ftl_load", 00:03:51.323 "bdev_ftl_create", 00:03:51.323 "bdev_virtio_attach_controller", 00:03:51.323 "bdev_virtio_scsi_get_devices", 00:03:51.323 "bdev_virtio_detach_controller", 00:03:51.323 "bdev_virtio_blk_set_hotplug", 00:03:51.323 "bdev_iscsi_delete", 00:03:51.323 "bdev_iscsi_create", 00:03:51.323 "bdev_iscsi_set_options", 00:03:51.323 "accel_error_inject_error", 00:03:51.323 "ioat_scan_accel_module", 00:03:51.323 "dsa_scan_accel_module", 00:03:51.323 "iaa_scan_accel_module", 00:03:51.323 "vfu_virtio_create_fs_endpoint", 00:03:51.323 "vfu_virtio_create_scsi_endpoint", 00:03:51.323 "vfu_virtio_scsi_remove_target", 00:03:51.323 "vfu_virtio_scsi_add_target", 00:03:51.323 "vfu_virtio_create_blk_endpoint", 00:03:51.323 "vfu_virtio_delete_endpoint", 00:03:51.323 "keyring_file_remove_key", 00:03:51.323 "keyring_file_add_key", 00:03:51.323 "keyring_linux_set_options", 00:03:51.323 "fsdev_aio_delete", 00:03:51.323 "fsdev_aio_create", 00:03:51.323 "iscsi_get_histogram", 00:03:51.323 "iscsi_enable_histogram", 00:03:51.323 "iscsi_set_options", 00:03:51.323 "iscsi_get_auth_groups", 00:03:51.323 "iscsi_auth_group_remove_secret", 00:03:51.323 "iscsi_auth_group_add_secret", 00:03:51.323 "iscsi_delete_auth_group", 00:03:51.323 "iscsi_create_auth_group", 00:03:51.323 "iscsi_set_discovery_auth", 00:03:51.323 "iscsi_get_options", 00:03:51.323 "iscsi_target_node_request_logout", 00:03:51.323 "iscsi_target_node_set_redirect", 00:03:51.323 "iscsi_target_node_set_auth", 00:03:51.323 "iscsi_target_node_add_lun", 00:03:51.323 "iscsi_get_stats", 00:03:51.323 "iscsi_get_connections", 00:03:51.323 "iscsi_portal_group_set_auth", 00:03:51.323 "iscsi_start_portal_group", 00:03:51.323 "iscsi_delete_portal_group", 00:03:51.323 "iscsi_create_portal_group", 00:03:51.323 "iscsi_get_portal_groups", 00:03:51.323 "iscsi_delete_target_node", 00:03:51.323 "iscsi_target_node_remove_pg_ig_maps", 00:03:51.323 "iscsi_target_node_add_pg_ig_maps", 00:03:51.323 "iscsi_create_target_node", 00:03:51.324 "iscsi_get_target_nodes", 00:03:51.324 "iscsi_delete_initiator_group", 00:03:51.324 "iscsi_initiator_group_remove_initiators", 00:03:51.324 "iscsi_initiator_group_add_initiators", 00:03:51.324 "iscsi_create_initiator_group", 00:03:51.324 "iscsi_get_initiator_groups", 00:03:51.324 "nvmf_set_crdt", 00:03:51.324 "nvmf_set_config", 00:03:51.324 "nvmf_set_max_subsystems", 00:03:51.324 "nvmf_stop_mdns_prr", 00:03:51.324 "nvmf_publish_mdns_prr", 00:03:51.324 "nvmf_subsystem_get_listeners", 00:03:51.324 
"nvmf_subsystem_get_qpairs", 00:03:51.324 "nvmf_subsystem_get_controllers", 00:03:51.324 "nvmf_get_stats", 00:03:51.324 "nvmf_get_transports", 00:03:51.324 "nvmf_create_transport", 00:03:51.324 "nvmf_get_targets", 00:03:51.324 "nvmf_delete_target", 00:03:51.324 "nvmf_create_target", 00:03:51.324 "nvmf_subsystem_allow_any_host", 00:03:51.324 "nvmf_subsystem_set_keys", 00:03:51.324 "nvmf_subsystem_remove_host", 00:03:51.324 "nvmf_subsystem_add_host", 00:03:51.324 "nvmf_ns_remove_host", 00:03:51.324 "nvmf_ns_add_host", 00:03:51.324 "nvmf_subsystem_remove_ns", 00:03:51.324 "nvmf_subsystem_set_ns_ana_group", 00:03:51.324 "nvmf_subsystem_add_ns", 00:03:51.324 "nvmf_subsystem_listener_set_ana_state", 00:03:51.324 "nvmf_discovery_get_referrals", 00:03:51.324 "nvmf_discovery_remove_referral", 00:03:51.324 "nvmf_discovery_add_referral", 00:03:51.324 "nvmf_subsystem_remove_listener", 00:03:51.324 "nvmf_subsystem_add_listener", 00:03:51.324 "nvmf_delete_subsystem", 00:03:51.324 "nvmf_create_subsystem", 00:03:51.324 "nvmf_get_subsystems", 00:03:51.324 "env_dpdk_get_mem_stats", 00:03:51.324 "nbd_get_disks", 00:03:51.324 "nbd_stop_disk", 00:03:51.324 "nbd_start_disk", 00:03:51.324 "ublk_recover_disk", 00:03:51.324 "ublk_get_disks", 00:03:51.324 "ublk_stop_disk", 00:03:51.324 "ublk_start_disk", 00:03:51.324 "ublk_destroy_target", 00:03:51.324 "ublk_create_target", 00:03:51.324 "virtio_blk_create_transport", 00:03:51.324 "virtio_blk_get_transports", 00:03:51.324 "vhost_controller_set_coalescing", 00:03:51.324 "vhost_get_controllers", 00:03:51.324 "vhost_delete_controller", 00:03:51.324 "vhost_create_blk_controller", 00:03:51.324 "vhost_scsi_controller_remove_target", 00:03:51.324 "vhost_scsi_controller_add_target", 00:03:51.324 "vhost_start_scsi_controller", 00:03:51.324 "vhost_create_scsi_controller", 00:03:51.324 "thread_set_cpumask", 00:03:51.324 "scheduler_set_options", 00:03:51.324 "framework_get_governor", 00:03:51.324 "framework_get_scheduler", 00:03:51.324 "framework_set_scheduler", 00:03:51.324 "framework_get_reactors", 00:03:51.324 "thread_get_io_channels", 00:03:51.324 "thread_get_pollers", 00:03:51.324 "thread_get_stats", 00:03:51.324 "framework_monitor_context_switch", 00:03:51.324 "spdk_kill_instance", 00:03:51.324 "log_enable_timestamps", 00:03:51.324 "log_get_flags", 00:03:51.324 "log_clear_flag", 00:03:51.324 "log_set_flag", 00:03:51.324 "log_get_level", 00:03:51.324 "log_set_level", 00:03:51.324 "log_get_print_level", 00:03:51.324 "log_set_print_level", 00:03:51.324 "framework_enable_cpumask_locks", 00:03:51.324 "framework_disable_cpumask_locks", 00:03:51.324 "framework_wait_init", 00:03:51.324 "framework_start_init", 00:03:51.324 "scsi_get_devices", 00:03:51.324 "bdev_get_histogram", 00:03:51.324 "bdev_enable_histogram", 00:03:51.324 "bdev_set_qos_limit", 00:03:51.324 "bdev_set_qd_sampling_period", 00:03:51.324 "bdev_get_bdevs", 00:03:51.324 "bdev_reset_iostat", 00:03:51.324 "bdev_get_iostat", 00:03:51.324 "bdev_examine", 00:03:51.324 "bdev_wait_for_examine", 00:03:51.324 "bdev_set_options", 00:03:51.324 "accel_get_stats", 00:03:51.324 "accel_set_options", 00:03:51.324 "accel_set_driver", 00:03:51.324 "accel_crypto_key_destroy", 00:03:51.324 "accel_crypto_keys_get", 00:03:51.324 "accel_crypto_key_create", 00:03:51.324 "accel_assign_opc", 00:03:51.324 "accel_get_module_info", 00:03:51.324 "accel_get_opc_assignments", 00:03:51.324 "vmd_rescan", 00:03:51.324 "vmd_remove_device", 00:03:51.324 "vmd_enable", 00:03:51.324 "sock_get_default_impl", 00:03:51.324 "sock_set_default_impl", 
00:03:51.324 "sock_impl_set_options", 00:03:51.324 "sock_impl_get_options", 00:03:51.324 "iobuf_get_stats", 00:03:51.324 "iobuf_set_options", 00:03:51.324 "keyring_get_keys", 00:03:51.324 "vfu_tgt_set_base_path", 00:03:51.324 "framework_get_pci_devices", 00:03:51.324 "framework_get_config", 00:03:51.324 "framework_get_subsystems", 00:03:51.324 "fsdev_set_opts", 00:03:51.324 "fsdev_get_opts", 00:03:51.324 "trace_get_info", 00:03:51.324 "trace_get_tpoint_group_mask", 00:03:51.324 "trace_disable_tpoint_group", 00:03:51.324 "trace_enable_tpoint_group", 00:03:51.324 "trace_clear_tpoint_mask", 00:03:51.324 "trace_set_tpoint_mask", 00:03:51.324 "notify_get_notifications", 00:03:51.324 "notify_get_types", 00:03:51.324 "spdk_get_version", 00:03:51.324 "rpc_get_methods" 00:03:51.324 ] 00:03:51.324 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:51.324 09:15:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.324 09:15:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.584 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:51.584 09:15:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3137768 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3137768 ']' 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3137768 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137768 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3137768' 00:03:51.584 killing process with pid 3137768 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3137768 00:03:51.584 09:15:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3137768 00:03:51.843 00:03:51.843 real 0m1.121s 00:03:51.843 user 0m1.905s 00:03:51.843 sys 0m0.416s 00:03:51.843 09:15:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.843 09:15:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.843 ************************************ 00:03:51.843 END TEST spdkcli_tcp 00:03:51.843 ************************************ 00:03:51.843 09:15:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:51.843 09:15:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.843 09:15:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.843 09:15:04 -- common/autotest_common.sh@10 -- # set +x 00:03:51.843 ************************************ 00:03:51.843 START TEST dpdk_mem_utility 00:03:51.843 ************************************ 00:03:51.843 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:51.843 * Looking for test storage... 
00:03:51.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:51.843 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.843 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.843 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.102 09:15:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.102 --rc genhtml_branch_coverage=1 00:03:52.102 --rc genhtml_function_coverage=1 00:03:52.102 --rc genhtml_legend=1 00:03:52.102 --rc geninfo_all_blocks=1 00:03:52.102 --rc geninfo_unexecuted_blocks=1 00:03:52.102 00:03:52.102 ' 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.102 --rc 
genhtml_branch_coverage=1 00:03:52.102 --rc genhtml_function_coverage=1 00:03:52.102 --rc genhtml_legend=1 00:03:52.102 --rc geninfo_all_blocks=1 00:03:52.102 --rc geninfo_unexecuted_blocks=1 00:03:52.102 00:03:52.102 ' 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.102 --rc genhtml_branch_coverage=1 00:03:52.102 --rc genhtml_function_coverage=1 00:03:52.102 --rc genhtml_legend=1 00:03:52.102 --rc geninfo_all_blocks=1 00:03:52.102 --rc geninfo_unexecuted_blocks=1 00:03:52.102 00:03:52.102 ' 00:03:52.102 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.102 --rc genhtml_branch_coverage=1 00:03:52.102 --rc genhtml_function_coverage=1 00:03:52.102 --rc genhtml_legend=1 00:03:52.102 --rc geninfo_all_blocks=1 00:03:52.102 --rc geninfo_unexecuted_blocks=1 00:03:52.102 00:03:52.102 ' 00:03:52.102 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:52.102 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3137952 00:03:52.103 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3137952 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3137952 ']' 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.103 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.103 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.103 [2024-12-13 09:15:04.326170] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:03:52.103 [2024-12-13 09:15:04.326218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137952 ] 00:03:52.103 [2024-12-13 09:15:04.389773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.103 [2024-12-13 09:15:04.431164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.362 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.362 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:52.362 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:52.362 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:52.362 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.362 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.362 { 00:03:52.362 "filename": "/tmp/spdk_mem_dump.txt" 00:03:52.362 } 00:03:52.362 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.362 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:52.362 DPDK memory size 818.000000 MiB in 1 heap(s) 00:03:52.362 1 heaps totaling size 818.000000 MiB 00:03:52.362 size: 818.000000 MiB heap id: 0 00:03:52.362 end heaps---------- 00:03:52.362 9 mempools totaling size 603.782043 MiB 00:03:52.362 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:52.362 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:52.362 size: 100.555481 MiB name: bdev_io_3137952 00:03:52.362 size: 50.003479 MiB name: msgpool_3137952 00:03:52.362 size: 36.509338 MiB name: fsdev_io_3137952 00:03:52.362 size: 21.763794 MiB name: PDU_Pool 00:03:52.362 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:52.362 size: 4.133484 MiB name: evtpool_3137952 00:03:52.362 size: 0.026123 MiB name: Session_Pool 00:03:52.362 end mempools------- 00:03:52.362 6 memzones totaling size 4.142822 MiB 00:03:52.362 size: 1.000366 MiB name: RG_ring_0_3137952 00:03:52.362 size: 1.000366 MiB name: RG_ring_1_3137952 00:03:52.362 size: 1.000366 MiB name: RG_ring_4_3137952 00:03:52.362 size: 1.000366 MiB name: RG_ring_5_3137952 00:03:52.362 size: 0.125366 MiB name: RG_ring_2_3137952 00:03:52.362 size: 0.015991 MiB name: RG_ring_3_3137952 00:03:52.362 end memzones------- 00:03:52.362 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:52.621 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:52.621 list of free elements. 
size: 10.852478 MiB 00:03:52.621 element at address: 0x200019200000 with size: 0.999878 MiB 00:03:52.621 element at address: 0x200019400000 with size: 0.999878 MiB 00:03:52.622 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:52.622 element at address: 0x200032000000 with size: 0.994446 MiB 00:03:52.622 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:52.622 element at address: 0x200012c00000 with size: 0.944275 MiB 00:03:52.622 element at address: 0x200019600000 with size: 0.936584 MiB 00:03:52.622 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:52.622 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:03:52.622 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:52.622 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:52.622 element at address: 0x200019800000 with size: 0.485657 MiB 00:03:52.622 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:52.622 element at address: 0x200028200000 with size: 0.410034 MiB 00:03:52.622 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:52.622 list of standard malloc elements. size: 199.218628 MiB 00:03:52.622 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:52.622 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:52.622 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:52.622 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:03:52.622 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:03:52.622 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:52.622 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:03:52.622 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:52.622 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:03:52.622 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:03:52.622 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:03:52.622 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200028268f80 with size: 0.000183 MiB 00:03:52.622 element at address: 0x200028269040 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:03:52.622 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:03:52.622 list of memzone associated elements. size: 607.928894 MiB 00:03:52.622 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:03:52.622 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:52.622 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:03:52.622 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:52.622 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:03:52.622 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3137952_0 00:03:52.622 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:52.622 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3137952_0 00:03:52.622 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:52.622 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3137952_0 00:03:52.622 element at address: 0x2000199be940 with size: 20.255554 MiB 00:03:52.622 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:52.622 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:03:52.622 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:52.622 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:52.622 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3137952_0 00:03:52.622 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:52.622 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3137952 00:03:52.622 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:52.622 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3137952 00:03:52.622 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:52.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:52.622 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:03:52.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:52.622 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:52.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:52.622 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:52.622 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:52.622 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:52.622 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3137952 00:03:52.622 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:52.622 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3137952 00:03:52.622 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:03:52.622 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3137952 00:03:52.622 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:03:52.622 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3137952 00:03:52.622 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:52.622 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3137952 00:03:52.622 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:52.622 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3137952 00:03:52.622 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:52.622 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:52.622 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:52.622 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:52.622 element at address: 0x20001987c540 with size: 0.250488 MiB 00:03:52.622 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:52.622 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:52.622 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3137952 00:03:52.622 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:52.622 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3137952 00:03:52.622 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:03:52.622 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:52.622 element at address: 0x200028269100 with size: 0.023743 MiB 00:03:52.622 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:52.622 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:52.622 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3137952 00:03:52.622 element at address: 0x20002826f240 with size: 0.002441 MiB 00:03:52.622 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:52.622 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:52.622 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3137952 00:03:52.622 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:52.622 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3137952 00:03:52.622 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:52.622 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3137952 00:03:52.622 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:03:52.622 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:52.622 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:52.622 09:15:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3137952 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3137952 ']' 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3137952 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137952 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3137952' 00:03:52.622 killing process with pid 3137952 00:03:52.622 09:15:04 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3137952 00:03:52.622 09:15:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3137952 00:03:52.886 00:03:52.886 real 0m0.985s 00:03:52.886 user 0m0.926s 00:03:52.886 sys 0m0.396s 00:03:52.886 09:15:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.886 09:15:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:52.886 ************************************ 00:03:52.886 END TEST dpdk_mem_utility 00:03:52.886 ************************************ 00:03:52.886 09:15:05 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:52.886 09:15:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.886 09:15:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.886 09:15:05 -- common/autotest_common.sh@10 -- # set +x 00:03:52.886 ************************************ 00:03:52.886 START TEST event 00:03:52.886 ************************************ 00:03:52.886 09:15:05 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:52.886 * Looking for test storage... 00:03:52.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:52.886 09:15:05 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.886 09:15:05 event -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.886 09:15:05 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.145 09:15:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.145 09:15:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.145 09:15:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.145 09:15:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.145 09:15:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.145 09:15:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.145 09:15:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.145 09:15:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.145 09:15:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.145 09:15:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.145 09:15:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.145 09:15:05 event -- scripts/common.sh@344 -- # case "$op" in 00:03:53.145 09:15:05 event -- scripts/common.sh@345 -- # : 1 00:03:53.145 09:15:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.145 09:15:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.145 09:15:05 event -- scripts/common.sh@365 -- # decimal 1 00:03:53.145 09:15:05 event -- scripts/common.sh@353 -- # local d=1 00:03:53.145 09:15:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.145 09:15:05 event -- scripts/common.sh@355 -- # echo 1 00:03:53.145 09:15:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.145 09:15:05 event -- scripts/common.sh@366 -- # decimal 2 00:03:53.145 09:15:05 event -- scripts/common.sh@353 -- # local d=2 00:03:53.145 09:15:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.145 09:15:05 event -- scripts/common.sh@355 -- # echo 2 00:03:53.145 09:15:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.145 09:15:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.145 09:15:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.145 09:15:05 event -- scripts/common.sh@368 -- # return 0 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.145 --rc genhtml_branch_coverage=1 00:03:53.145 --rc genhtml_function_coverage=1 00:03:53.145 --rc genhtml_legend=1 00:03:53.145 --rc geninfo_all_blocks=1 00:03:53.145 --rc geninfo_unexecuted_blocks=1 00:03:53.145 00:03:53.145 ' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.145 --rc genhtml_branch_coverage=1 00:03:53.145 --rc genhtml_function_coverage=1 00:03:53.145 --rc genhtml_legend=1 00:03:53.145 --rc geninfo_all_blocks=1 00:03:53.145 --rc geninfo_unexecuted_blocks=1 00:03:53.145 00:03:53.145 ' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.145 --rc genhtml_branch_coverage=1 00:03:53.145 --rc genhtml_function_coverage=1 00:03:53.145 --rc genhtml_legend=1 00:03:53.145 --rc geninfo_all_blocks=1 00:03:53.145 --rc geninfo_unexecuted_blocks=1 00:03:53.145 00:03:53.145 ' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.145 --rc genhtml_branch_coverage=1 00:03:53.145 --rc genhtml_function_coverage=1 00:03:53.145 --rc genhtml_legend=1 00:03:53.145 --rc geninfo_all_blocks=1 00:03:53.145 --rc geninfo_unexecuted_blocks=1 00:03:53.145 00:03:53.145 ' 00:03:53.145 09:15:05 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:53.145 09:15:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:53.145 09:15:05 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:53.145 09:15:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.145 09:15:05 event -- common/autotest_common.sh@10 -- # set +x 00:03:53.145 ************************************ 00:03:53.145 START TEST event_perf 00:03:53.145 ************************************ 00:03:53.145 09:15:05 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:03:53.145 Running I/O for 1 seconds...[2024-12-13 09:15:05.380880] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:53.145 [2024-12-13 09:15:05.380948] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138226 ] 00:03:53.145 [2024-12-13 09:15:05.447315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:53.145 [2024-12-13 09:15:05.490535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.145 [2024-12-13 09:15:05.490630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:53.145 [2024-12-13 09:15:05.490727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:53.145 [2024-12-13 09:15:05.490729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.522 Running I/O for 1 seconds... 00:03:54.523 lcore 0: 210446 00:03:54.523 lcore 1: 210445 00:03:54.523 lcore 2: 210445 00:03:54.523 lcore 3: 210447 00:03:54.523 done. 00:03:54.523 00:03:54.523 real 0m1.172s 00:03:54.523 user 0m4.094s 00:03:54.523 sys 0m0.075s 00:03:54.523 09:15:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.523 09:15:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:54.523 ************************************ 00:03:54.523 END TEST event_perf 00:03:54.523 ************************************ 00:03:54.523 09:15:06 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.523 09:15:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:54.523 09:15:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.523 09:15:06 event -- common/autotest_common.sh@10 -- # set +x 00:03:54.523 ************************************ 00:03:54.523 START TEST event_reactor 00:03:54.523 ************************************ 00:03:54.523 09:15:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:54.523 [2024-12-13 09:15:06.624793] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:03:54.523 [2024-12-13 09:15:06.624865] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138471 ] 00:03:54.523 [2024-12-13 09:15:06.692503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.523 [2024-12-13 09:15:06.733015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.472 test_start 00:03:55.472 oneshot 00:03:55.472 tick 100 00:03:55.472 tick 100 00:03:55.472 tick 250 00:03:55.472 tick 100 00:03:55.472 tick 100 00:03:55.472 tick 250 00:03:55.472 tick 100 00:03:55.472 tick 500 00:03:55.472 tick 100 00:03:55.472 tick 100 00:03:55.472 tick 250 00:03:55.472 tick 100 00:03:55.472 tick 100 00:03:55.472 test_end 00:03:55.472 00:03:55.472 real 0m1.169s 00:03:55.472 user 0m1.094s 00:03:55.472 sys 0m0.071s 00:03:55.472 09:15:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.472 09:15:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:55.472 ************************************ 00:03:55.472 END TEST event_reactor 00:03:55.472 ************************************ 00:03:55.472 09:15:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:55.472 09:15:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:55.472 09:15:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.472 09:15:07 event -- common/autotest_common.sh@10 -- # set +x 00:03:55.731 ************************************ 00:03:55.731 START TEST event_reactor_perf 00:03:55.731 ************************************ 00:03:55.731 09:15:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:55.731 [2024-12-13 09:15:07.865374] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:03:55.731 [2024-12-13 09:15:07.865433] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138711 ] 00:03:55.731 [2024-12-13 09:15:07.933793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.731 [2024-12-13 09:15:07.972848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.667 test_start 00:03:56.667 test_end 00:03:56.667 Performance: 509769 events per second 00:03:56.667 00:03:56.667 real 0m1.165s 00:03:56.667 user 0m1.100s 00:03:56.667 sys 0m0.061s 00:03:56.667 09:15:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.667 09:15:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:56.667 ************************************ 00:03:56.667 END TEST event_reactor_perf 00:03:56.667 ************************************ 00:03:56.926 09:15:09 event -- event/event.sh@49 -- # uname -s 00:03:56.926 09:15:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:56.926 09:15:09 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:56.926 09:15:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.926 09:15:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.926 09:15:09 event -- common/autotest_common.sh@10 -- # set +x 00:03:56.926 ************************************ 00:03:56.926 START TEST event_scheduler 00:03:56.926 ************************************ 00:03:56.926 09:15:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:56.926 * Looking for test storage... 
00:03:56.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:56.926 09:15:09 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:56.926 09:15:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:03:56.926 09:15:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:56.926 09:15:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:56.926 09:15:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.926 09:15:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.926 09:15:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.926 09:15:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.927 09:15:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.927 --rc genhtml_branch_coverage=1 00:03:56.927 --rc genhtml_function_coverage=1 00:03:56.927 --rc genhtml_legend=1 00:03:56.927 --rc geninfo_all_blocks=1 00:03:56.927 --rc geninfo_unexecuted_blocks=1 00:03:56.927 00:03:56.927 ' 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.927 --rc genhtml_branch_coverage=1 00:03:56.927 --rc genhtml_function_coverage=1 00:03:56.927 --rc genhtml_legend=1 00:03:56.927 --rc geninfo_all_blocks=1 00:03:56.927 --rc geninfo_unexecuted_blocks=1 00:03:56.927 00:03:56.927 ' 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.927 --rc genhtml_branch_coverage=1 00:03:56.927 --rc genhtml_function_coverage=1 00:03:56.927 --rc genhtml_legend=1 00:03:56.927 --rc geninfo_all_blocks=1 00:03:56.927 --rc geninfo_unexecuted_blocks=1 00:03:56.927 00:03:56.927 ' 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.927 --rc genhtml_branch_coverage=1 00:03:56.927 --rc genhtml_function_coverage=1 00:03:56.927 --rc genhtml_legend=1 00:03:56.927 --rc geninfo_all_blocks=1 00:03:56.927 --rc geninfo_unexecuted_blocks=1 00:03:56.927 00:03:56.927 ' 00:03:56.927 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:56.927 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3139266 00:03:56.927 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.927 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3139266 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3139266 ']' 00:03:56.927 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.927 09:15:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.186 [2024-12-13 09:15:09.295200] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:03:57.186 [2024-12-13 09:15:09.295251] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139266 ] 00:03:57.186 [2024-12-13 09:15:09.356957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:57.186 [2024-12-13 09:15:09.402597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.186 [2024-12-13 09:15:09.402616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.186 [2024-12-13 09:15:09.402684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:57.186 [2024-12-13 09:15:09.402686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:57.186 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.186 [2024-12-13 09:15:09.455271] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:57.186 [2024-12-13 09:15:09.455287] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:57.186 [2024-12-13 09:15:09.455296] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:57.186 [2024-12-13 09:15:09.455301] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:57.186 [2024-12-13 09:15:09.455307] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.186 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.186 [2024-12-13 09:15:09.530395] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
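A rough manual equivalent of the scheduler_thread_create/scheduler_thread_delete calls traced below, pieced together from the flags visible in this log (the PYTHONPATH step and the exact plugin module layout are assumptions; the test itself goes through the rpc_cmd wrapper and the thread id 12 is simply the value this particular run reports):

    # sketch only: drive the running scheduler test app over /var/tmp/spdk.sock by hand
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # assumption: make the scheduler_plugin module importable for rpc.py
    export PYTHONPATH=$PYTHONPATH:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
    # create an active thread pinned to core 0 (cpumask 0x1, 100% active), as the subtest does
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # a thread created without a cpumask may be scheduled on any core
    $RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    # remove a thread by the id returned from the create call (12 in this run)
    $RPC --plugin scheduler_plugin scheduler_thread_delete 12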
00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.186 09:15:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.186 09:15:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 ************************************ 00:03:57.445 START TEST scheduler_create_thread 00:03:57.445 ************************************ 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 2 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 3 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 4 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 5 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 6 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 7 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 8 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 9 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 10 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.445 09:15:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.380 09:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.380 09:15:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:58.380 09:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.380 09:15:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.763 09:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.763 09:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:59.763 09:15:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:59.763 09:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.763 09:15:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.698 09:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.698 00:04:00.698 real 0m3.382s 00:04:00.698 user 0m0.024s 00:04:00.698 sys 0m0.005s 00:04:00.698 09:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.698 09:15:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.698 ************************************ 00:04:00.698 END TEST scheduler_create_thread 00:04:00.698 ************************************ 00:04:00.698 09:15:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:00.698 09:15:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3139266 00:04:00.698 09:15:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3139266 ']' 00:04:00.698 09:15:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3139266 00:04:00.698 09:15:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:00.698 09:15:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.698 09:15:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139266 00:04:00.698 09:15:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:00.698 09:15:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:00.698 09:15:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139266' 00:04:00.698 killing process with pid 3139266 00:04:00.698 09:15:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3139266 00:04:00.698 09:15:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3139266 00:04:01.266 [2024-12-13 09:15:13.326492] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
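For reference, a minimal sketch of the killprocess teardown that the xtrace output above spells out step by step; it is reconstructed from the trace rather than copied from test/common/autotest_common.sh, so the real helper may differ in detail (the sudo branch is never taken in this run and is left as a stub):

    killprocess() {
        local pid=$1
        # '[' -z "$pid" ']' guard from the trace: a pid argument is required
        [ -n "$pid" ] || return 1
        # the process must still be alive before we try to name and kill it
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                :  # the real helper targets the sudo child here; not needed when comm= is reactor_*
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }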
00:04:01.266 00:04:01.266 real 0m4.453s 00:04:01.266 user 0m7.828s 00:04:01.266 sys 0m0.355s 00:04:01.266 09:15:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.266 09:15:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:01.266 ************************************ 00:04:01.266 END TEST event_scheduler 00:04:01.266 ************************************ 00:04:01.266 09:15:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:01.266 09:15:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:01.266 09:15:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.266 09:15:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.266 09:15:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:01.266 ************************************ 00:04:01.266 START TEST app_repeat 00:04:01.266 ************************************ 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3140112 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3140112' 00:04:01.266 Process app_repeat pid: 3140112 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:01.266 spdk_app_start Round 0 00:04:01.266 09:15:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3140112 /var/tmp/spdk-nbd.sock 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3140112 ']' 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:01.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.266 09:15:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:01.524 [2024-12-13 09:15:13.640387] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
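The app_repeat harness that starts here is driven by a handful of lines in test/event/event.sh; the sketch below paraphrases what the trace shows (binary path shortened to $rootdir, same -r/-m/-t options), so treat it as an illustration rather than the verbatim script.

    # Start the app_repeat SPDK app on cores 0-1 with a 4 s per-round timer and wait for its RPC socket.
    rpc_server=/var/tmp/spdk-nbd.sock
    "$rootdir/test/event/app_repeat/app_repeat" -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten "$repeat_pid" "$rpc_server"   # block until the UNIX-domain socket is accepting RPCs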
00:04:01.524 [2024-12-13 09:15:13.640463] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140112 ] 00:04:01.524 [2024-12-13 09:15:13.706697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.524 [2024-12-13 09:15:13.752042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.524 [2024-12-13 09:15:13.752046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.524 09:15:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.524 09:15:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:01.525 09:15:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:01.783 Malloc0 00:04:01.783 09:15:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:02.041 Malloc1 00:04:02.041 09:15:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:02.041 09:15:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:02.300 /dev/nbd0 00:04:02.300 09:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:02.300 09:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:02.300 1+0 records in 00:04:02.300 1+0 records out 00:04:02.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206916 s, 19.8 MB/s 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:02.300 09:15:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:02.300 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:02.300 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:02.300 09:15:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:02.559 /dev/nbd1 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:02.559 1+0 records in 00:04:02.559 1+0 records out 00:04:02.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191907 s, 21.3 MB/s 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:02.559 09:15:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:02.559 
09:15:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:02.559 { 00:04:02.559 "nbd_device": "/dev/nbd0", 00:04:02.559 "bdev_name": "Malloc0" 00:04:02.559 }, 00:04:02.559 { 00:04:02.559 "nbd_device": "/dev/nbd1", 00:04:02.559 "bdev_name": "Malloc1" 00:04:02.559 } 00:04:02.559 ]' 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:02.559 { 00:04:02.559 "nbd_device": "/dev/nbd0", 00:04:02.559 "bdev_name": "Malloc0" 00:04:02.559 }, 00:04:02.559 { 00:04:02.559 "nbd_device": "/dev/nbd1", 00:04:02.559 "bdev_name": "Malloc1" 00:04:02.559 } 00:04:02.559 ]' 00:04:02.559 09:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:02.818 /dev/nbd1' 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:02.818 /dev/nbd1' 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:02.818 256+0 records in 00:04:02.818 256+0 records out 00:04:02.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107923 s, 97.2 MB/s 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:02.818 256+0 records in 00:04:02.818 256+0 records out 00:04:02.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013971 s, 75.1 MB/s 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:02.818 09:15:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:02.818 256+0 records in 00:04:02.818 256+0 records out 00:04:02.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156444 s, 67.0 MB/s 00:04:02.818 09:15:15 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:02.818 09:15:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:03.077 09:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:03.335 09:15:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:03.335 09:15:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:03.593 09:15:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:03.852 [2024-12-13 09:15:16.030944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.852 [2024-12-13 09:15:16.067905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.852 [2024-12-13 09:15:16.067908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.852 [2024-12-13 09:15:16.107463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:03.852 [2024-12-13 09:15:16.107503] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:07.139 09:15:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:07.139 09:15:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:07.139 spdk_app_start Round 1 00:04:07.139 09:15:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3140112 /var/tmp/spdk-nbd.sock 00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3140112 ']' 00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:07.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
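Each of the rounds above repeats the same malloc-bdev/NBD round trip. Pieced together from the nbd_common.sh trace, the core of one round looks roughly like the sketch below (rpc.py path shortened, file names taken from this run):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # -> Malloc0: 64 MiB backing store, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0      # export each bdev as a kernel NBD device
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    tmp=test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write the data through each NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # read back and compare byte-for-byte
    done
    rm "$tmp"

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1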
00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.139 09:15:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:07.139 09:15:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.139 09:15:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:07.139 09:15:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:07.139 Malloc0 00:04:07.139 09:15:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:07.139 Malloc1 00:04:07.139 09:15:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.139 09:15:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:07.398 /dev/nbd0 00:04:07.398 09:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:07.398 09:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:07.398 1+0 records in 00:04:07.398 1+0 records out 00:04:07.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279233 s, 14.7 MB/s 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.398 09:15:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.398 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.398 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.398 09:15:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:07.656 /dev/nbd1 00:04:07.656 09:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:07.656 09:15:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:07.656 1+0 records in 00:04:07.656 1+0 records out 00:04:07.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214236 s, 19.1 MB/s 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.656 09:15:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.656 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.656 09:15:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.657 09:15:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.657 09:15:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.657 09:15:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:07.915 { 00:04:07.915 "nbd_device": "/dev/nbd0", 00:04:07.915 "bdev_name": "Malloc0" 00:04:07.915 }, 00:04:07.915 { 00:04:07.915 "nbd_device": "/dev/nbd1", 00:04:07.915 "bdev_name": "Malloc1" 00:04:07.915 } 00:04:07.915 ]' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:07.915 { 00:04:07.915 "nbd_device": "/dev/nbd0", 00:04:07.915 "bdev_name": "Malloc0" 00:04:07.915 }, 00:04:07.915 { 00:04:07.915 "nbd_device": "/dev/nbd1", 00:04:07.915 "bdev_name": "Malloc1" 00:04:07.915 } 00:04:07.915 ]' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:07.915 /dev/nbd1' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:07.915 /dev/nbd1' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:07.915 09:15:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:07.915 256+0 records in 00:04:07.915 256+0 records out 00:04:07.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010493 s, 99.9 MB/s 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:07.916 256+0 records in 00:04:07.916 256+0 records out 00:04:07.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143729 s, 73.0 MB/s 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:07.916 256+0 records in 00:04:07.916 256+0 records out 00:04:07.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150197 s, 69.8 MB/s 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:07.916 09:15:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:08.174 09:15:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.432 09:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:08.690 09:15:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:08.691 09:15:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:08.691 09:15:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:08.691 09:15:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:08.691 09:15:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:08.691 09:15:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:08.949 09:15:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:08.949 [2024-12-13 09:15:21.290909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.208 [2024-12-13 09:15:21.328831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.208 [2024-12-13 09:15:21.328834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.208 [2024-12-13 09:15:21.369896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:09.208 [2024-12-13 09:15:21.369937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:12.494 spdk_app_start Round 2 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3140112 /var/tmp/spdk-nbd.sock 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3140112 ']' 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:12.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
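The count checks woven through the rounds (count=2 right after the disks are started, count=0 once they are stopped) come from the nbd_get_count helper; a minimal sketch of it, assuming the same rpc.py/jq tooling:

    # Count NBD devices currently exported by the target behind the given RPC socket.
    nbd_get_count() {
        local rpc_server=$1
        scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # grep -c still prints 0, but exits non-zero, when nothing matches
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # expected: 2 after nbd_start_disk, 0 after nbd_stop_disk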
00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.494 09:15:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.494 Malloc0 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.494 Malloc1 00:04:12.494 09:15:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.494 09:15:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.753 /dev/nbd0 00:04:12.753 09:15:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.753 09:15:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:12.753 1+0 records in 00:04:12.753 1+0 records out 00:04:12.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180631 s, 22.7 MB/s 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.753 09:15:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.753 09:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.753 09:15:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.753 09:15:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.012 /dev/nbd1 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.012 1+0 records in 00:04:13.012 1+0 records out 00:04:13.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189157 s, 21.7 MB/s 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:13.012 09:15:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.012 09:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:13.271 { 00:04:13.271 "nbd_device": "/dev/nbd0", 00:04:13.271 "bdev_name": "Malloc0" 00:04:13.271 }, 00:04:13.271 { 00:04:13.271 "nbd_device": "/dev/nbd1", 00:04:13.271 "bdev_name": "Malloc1" 00:04:13.271 } 00:04:13.271 ]' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.271 { 00:04:13.271 "nbd_device": "/dev/nbd0", 00:04:13.271 "bdev_name": "Malloc0" 00:04:13.271 }, 00:04:13.271 { 00:04:13.271 "nbd_device": "/dev/nbd1", 00:04:13.271 "bdev_name": "Malloc1" 00:04:13.271 } 00:04:13.271 ]' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.271 /dev/nbd1' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.271 /dev/nbd1' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.271 256+0 records in 00:04:13.271 256+0 records out 00:04:13.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104464 s, 100 MB/s 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.271 256+0 records in 00:04:13.271 256+0 records out 00:04:13.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136926 s, 76.6 MB/s 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.271 256+0 records in 00:04:13.271 256+0 records out 00:04:13.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151589 s, 69.2 MB/s 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.271 09:15:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.530 09:15:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.788 09:15:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.788 09:15:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.788 09:15:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.788 09:15:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.047 09:15:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.047 09:15:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.047 09:15:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.306 [2024-12-13 09:15:26.543200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.306 [2024-12-13 09:15:26.580687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.306 [2024-12-13 09:15:26.580690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.306 [2024-12-13 09:15:26.621093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.306 [2024-12-13 09:15:26.621133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.665 09:15:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3140112 /var/tmp/spdk-nbd.sock 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3140112 ']' 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
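Both the start and stop paths above lean on a pair of polling helpers, waitfornbd and waitfornbd_exit, which watch /proc/partitions for the device to appear or disappear. The sketch below is a simplified reconstruction; the retry budget of 20 matches the trace, while the sleep interval and temporary file path are assumptions.

    # Wait until /dev/$1 shows up in /proc/partitions, then prove it serves a direct 4 KiB read.
    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]            # a zero-byte read would mean the device is not really usable
    }

    # Converse helper: wait until the device has disappeared again after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }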
00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:17.665 09:15:29 event.app_repeat -- event/event.sh@39 -- # killprocess 3140112 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3140112 ']' 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3140112 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140112 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140112' 00:04:17.665 killing process with pid 3140112 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3140112 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3140112 00:04:17.665 spdk_app_start is called in Round 0. 00:04:17.665 Shutdown signal received, stop current app iteration 00:04:17.665 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:04:17.665 spdk_app_start is called in Round 1. 00:04:17.665 Shutdown signal received, stop current app iteration 00:04:17.665 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:04:17.665 spdk_app_start is called in Round 2. 00:04:17.665 Shutdown signal received, stop current app iteration 00:04:17.665 Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 reinitialization... 00:04:17.665 spdk_app_start is called in Round 3. 
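killprocess, used above to shut down both the scheduler app and app_repeat, is essentially a guarded kill-and-wait. The sketch below is reconstructed from the trace; the sudo branch mirrors the comparison the helper performs, but the real implementation in autotest_common.sh handles more corner cases.

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                    # matches the '[' -z ... ']' guard seen in the trace
        kill -0 "$pid" || return 0                   # process already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                         # the target was launched through sudo
        else
            kill "$pid"
        fi
        wait "$pid" || true                          # reap it; a SIGTERM exit status is expected here
    }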
00:04:17.665 Shutdown signal received, stop current app iteration 00:04:17.665 09:15:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:17.665 09:15:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:17.665 00:04:17.665 real 0m16.160s 00:04:17.665 user 0m35.451s 00:04:17.665 sys 0m2.477s 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.665 09:15:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.665 ************************************ 00:04:17.665 END TEST app_repeat 00:04:17.665 ************************************ 00:04:17.665 09:15:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:17.665 09:15:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:17.665 09:15:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.665 09:15:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.665 09:15:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.665 ************************************ 00:04:17.665 START TEST cpu_locks 00:04:17.665 ************************************ 00:04:17.665 09:15:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:17.665 * Looking for test storage... 00:04:17.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:17.665 09:15:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.665 09:15:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.665 09:15:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.665 09:15:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.665 09:15:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.666 09:15:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.666 --rc genhtml_branch_coverage=1 00:04:17.666 --rc genhtml_function_coverage=1 00:04:17.666 --rc genhtml_legend=1 00:04:17.666 --rc geninfo_all_blocks=1 00:04:17.666 --rc geninfo_unexecuted_blocks=1 00:04:17.666 00:04:17.666 ' 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.666 --rc genhtml_branch_coverage=1 00:04:17.666 --rc genhtml_function_coverage=1 00:04:17.666 --rc genhtml_legend=1 00:04:17.666 --rc geninfo_all_blocks=1 00:04:17.666 --rc geninfo_unexecuted_blocks=1 00:04:17.666 00:04:17.666 ' 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.666 --rc genhtml_branch_coverage=1 00:04:17.666 --rc genhtml_function_coverage=1 00:04:17.666 --rc genhtml_legend=1 00:04:17.666 --rc geninfo_all_blocks=1 00:04:17.666 --rc geninfo_unexecuted_blocks=1 00:04:17.666 00:04:17.666 ' 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.666 --rc genhtml_branch_coverage=1 00:04:17.666 --rc genhtml_function_coverage=1 00:04:17.666 --rc genhtml_legend=1 00:04:17.666 --rc geninfo_all_blocks=1 00:04:17.666 --rc geninfo_unexecuted_blocks=1 00:04:17.666 00:04:17.666 ' 00:04:17.666 09:15:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:17.666 09:15:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:17.666 09:15:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:17.666 09:15:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.666 09:15:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:17.983 ************************************ 
00:04:17.983 START TEST default_locks 00:04:17.983 ************************************ 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3143230 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3143230 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3143230 ']' 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.983 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:17.983 [2024-12-13 09:15:30.087370] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:17.983 [2024-12-13 09:15:30.087414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143230 ] 00:04:17.983 [2024-12-13 09:15:30.149680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.983 [2024-12-13 09:15:30.189193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.241 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.241 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:18.241 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3143230 00:04:18.241 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3143230 00:04:18.241 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:18.499 lslocks: write error 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3143230 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3143230 ']' 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3143230 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.499 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143230 00:04:18.758 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.758 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.758 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3143230' 00:04:18.758 killing process with pid 3143230 00:04:18.758 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3143230 00:04:18.758 09:15:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3143230 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3143230 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3143230 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3143230 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3143230 ']' 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3143230) - No such process 00:04:19.017 ERROR: process (pid: 3143230) is no longer running 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:19.017 00:04:19.017 real 0m1.171s 00:04:19.017 user 0m1.152s 00:04:19.017 sys 0m0.525s 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.017 09:15:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 END TEST default_locks 00:04:19.017 ************************************ 00:04:19.017 09:15:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:19.017 09:15:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.017 09:15:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.017 09:15:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 START TEST default_locks_via_rpc 00:04:19.017 ************************************ 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3143406 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3143406 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3143406 ']' 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.017 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 [2024-12-13 09:15:31.315845] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:19.017 [2024-12-13 09:15:31.315880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143406 ] 00:04:19.017 [2024-12-13 09:15:31.379027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.276 [2024-12-13 09:15:31.421638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.276 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.534 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.534 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3143406 00:04:19.534 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3143406 00:04:19.534 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3143406 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3143406 ']' 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3143406 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.793 09:15:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143406 00:04:19.793 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.793 
09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.793 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143406' 00:04:19.793 killing process with pid 3143406 00:04:19.793 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3143406 00:04:19.793 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3143406 00:04:20.052 00:04:20.052 real 0m1.056s 00:04:20.052 user 0m1.024s 00:04:20.052 sys 0m0.477s 00:04:20.052 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.052 09:15:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.052 ************************************ 00:04:20.052 END TEST default_locks_via_rpc 00:04:20.052 ************************************ 00:04:20.052 09:15:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:20.052 09:15:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.052 09:15:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.052 09:15:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.052 ************************************ 00:04:20.052 START TEST non_locking_app_on_locked_coremask 00:04:20.052 ************************************ 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3143542 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3143542 /var/tmp/spdk.sock 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3143542 ']' 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.052 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:20.311 [2024-12-13 09:15:32.454892] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:20.311 [2024-12-13 09:15:32.454937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143542 ] 00:04:20.311 [2024-12-13 09:15:32.518586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.311 [2024-12-13 09:15:32.558535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3143758 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3143758 /var/tmp/spdk2.sock 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3143758 ']' 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:20.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.569 09:15:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:20.569 [2024-12-13 09:15:32.818256] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:20.569 [2024-12-13 09:15:32.818305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143758 ] 00:04:20.569 [2024-12-13 09:15:32.904486] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:20.569 [2024-12-13 09:15:32.904516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.828 [2024-12-13 09:15:32.989464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.395 09:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.395 09:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:21.395 09:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3143542 00:04:21.395 09:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3143542 00:04:21.395 09:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:21.963 lslocks: write error 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3143542 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3143542 ']' 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3143542 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143542 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143542' 00:04:21.963 killing process with pid 3143542 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3143542 00:04:21.963 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3143542 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3143758 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3143758 ']' 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3143758 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143758 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143758' 00:04:22.531 
killing process with pid 3143758 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3143758 00:04:22.531 09:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3143758 00:04:22.793 00:04:22.793 real 0m2.752s 00:04:22.793 user 0m2.930s 00:04:22.793 sys 0m0.896s 00:04:22.793 09:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.793 09:15:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:22.793 ************************************ 00:04:22.793 END TEST non_locking_app_on_locked_coremask 00:04:22.793 ************************************ 00:04:23.051 09:15:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:23.051 09:15:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.051 09:15:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.051 09:15:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.051 ************************************ 00:04:23.051 START TEST locking_app_on_unlocked_coremask 00:04:23.051 ************************************ 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3144101 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3144101 /var/tmp/spdk.sock 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144101 ']' 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.051 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.051 [2024-12-13 09:15:35.275723] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:23.052 [2024-12-13 09:15:35.275766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144101 ] 00:04:23.052 [2024-12-13 09:15:35.340352] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:23.052 [2024-12-13 09:15:35.340377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.052 [2024-12-13 09:15:35.379778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3144242 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3144242 /var/tmp/spdk2.sock 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144242 ']' 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:23.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.310 09:15:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.310 [2024-12-13 09:15:35.639580] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:23.310 [2024-12-13 09:15:35.639629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144242 ] 00:04:23.569 [2024-12-13 09:15:35.726254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.569 [2024-12-13 09:15:35.810288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.135 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.135 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.135 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3144242 00:04:24.135 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3144242 00:04:24.135 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:24.702 lslocks: write error 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3144101 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3144101 ']' 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3144101 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144101 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144101' 00:04:24.702 killing process with pid 3144101 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3144101 00:04:24.702 09:15:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3144101 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3144242 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3144242 ']' 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3144242 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144242 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.270 09:15:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144242' 00:04:25.270 killing process with pid 3144242 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3144242 00:04:25.270 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3144242 00:04:25.837 00:04:25.837 real 0m2.672s 00:04:25.837 user 0m2.830s 00:04:25.837 sys 0m0.870s 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.837 ************************************ 00:04:25.837 END TEST locking_app_on_unlocked_coremask 00:04:25.837 ************************************ 00:04:25.837 09:15:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:25.837 09:15:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.837 09:15:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.837 09:15:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.837 ************************************ 00:04:25.837 START TEST locking_app_on_locked_coremask 00:04:25.837 ************************************ 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3144626 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3144626 /var/tmp/spdk.sock 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144626 ']' 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.837 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.838 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.838 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.838 09:15:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.838 [2024-12-13 09:15:38.019059] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:25.838 [2024-12-13 09:15:38.019104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144626 ] 00:04:25.838 [2024-12-13 09:15:38.082701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.838 [2024-12-13 09:15:38.122269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3144728 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3144728 /var/tmp/spdk2.sock 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3144728 /var/tmp/spdk2.sock 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3144728 /var/tmp/spdk2.sock 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144728 ']' 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:26.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.097 09:15:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.097 [2024-12-13 09:15:38.382434] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:26.097 [2024-12-13 09:15:38.382486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144728 ] 00:04:26.356 [2024-12-13 09:15:38.471331] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3144626 has claimed it. 00:04:26.356 [2024-12-13 09:15:38.471369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:26.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3144728) - No such process 00:04:26.921 ERROR: process (pid: 3144728) is no longer running 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3144626 00:04:26.921 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3144626 00:04:26.922 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:27.180 lslocks: write error 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3144626 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3144626 ']' 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3144626 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.180 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144626 00:04:27.438 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.438 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.438 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144626' 00:04:27.438 killing process with pid 3144626 00:04:27.438 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3144626 00:04:27.438 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3144626 00:04:27.696 00:04:27.696 real 0m1.883s 00:04:27.696 user 0m2.029s 00:04:27.696 sys 0m0.621s 00:04:27.696 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:27.696 09:15:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.696 ************************************ 00:04:27.696 END TEST locking_app_on_locked_coremask 00:04:27.696 ************************************ 00:04:27.696 09:15:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:27.696 09:15:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.696 09:15:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.696 09:15:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:27.696 ************************************ 00:04:27.696 START TEST locking_overlapped_coremask 00:04:27.696 ************************************ 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3144989 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3144989 /var/tmp/spdk.sock 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144989 ']' 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.696 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.697 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.697 09:15:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.697 [2024-12-13 09:15:39.962782] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:27.697 [2024-12-13 09:15:39.962821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144989 ] 00:04:27.697 [2024-12-13 09:15:40.033375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:27.955 [2024-12-13 09:15:40.084341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.955 [2024-12-13 09:15:40.087466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.955 [2024-12-13 09:15:40.087469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3144998 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3144998 /var/tmp/spdk2.sock 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3144998 /var/tmp/spdk2.sock 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3144998 /var/tmp/spdk2.sock 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144998 ']' 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:27.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.955 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.213 [2024-12-13 09:15:40.345483] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:28.213 [2024-12-13 09:15:40.345534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144998 ] 00:04:28.213 [2024-12-13 09:15:40.435848] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3144989 has claimed it. 00:04:28.213 [2024-12-13 09:15:40.435887] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:28.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3144998) - No such process 00:04:28.780 ERROR: process (pid: 3144998) is no longer running 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3144989 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3144989 ']' 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3144989 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.780 09:15:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144989 00:04:28.780 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.780 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.780 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144989' 00:04:28.780 killing process with pid 3144989 00:04:28.780 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3144989 00:04:28.780 09:15:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3144989 00:04:29.039 00:04:29.039 real 0m1.430s 00:04:29.039 user 0m3.944s 00:04:29.039 sys 0m0.388s 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.039 ************************************ 00:04:29.039 END TEST locking_overlapped_coremask 00:04:29.039 ************************************ 00:04:29.039 09:15:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:29.039 09:15:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.039 09:15:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.039 09:15:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.039 ************************************ 00:04:29.039 START TEST locking_overlapped_coremask_via_rpc 00:04:29.039 ************************************ 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3145248 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3145248 /var/tmp/spdk.sock 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3145248 ']' 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.039 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.040 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.040 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.298 [2024-12-13 09:15:41.454733] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:29.298 [2024-12-13 09:15:41.454774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145248 ] 00:04:29.299 [2024-12-13 09:15:41.516504] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:29.299 [2024-12-13 09:15:41.516527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:29.299 [2024-12-13 09:15:41.560836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.299 [2024-12-13 09:15:41.560955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.299 [2024-12-13 09:15:41.560958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3145257 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3145257 /var/tmp/spdk2.sock 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3145257 ']' 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:29.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.557 09:15:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.557 [2024-12-13 09:15:41.816574] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:29.557 [2024-12-13 09:15:41.816621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145257 ] 00:04:29.557 [2024-12-13 09:15:41.909074] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:29.557 [2024-12-13 09:15:41.909103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:29.816 [2024-12-13 09:15:41.998682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.816 [2024-12-13 09:15:42.002497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.816 [2024-12-13 09:15:42.002498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.383 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 [2024-12-13 09:15:42.671523] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3145248 has claimed it. 
00:04:30.383 request: 00:04:30.383 { 00:04:30.383 "method": "framework_enable_cpumask_locks", 00:04:30.383 "req_id": 1 00:04:30.384 } 00:04:30.384 Got JSON-RPC error response 00:04:30.384 response: 00:04:30.384 { 00:04:30.384 "code": -32603, 00:04:30.384 "message": "Failed to claim CPU core: 2" 00:04:30.384 } 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3145248 /var/tmp/spdk.sock 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3145248 ']' 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.384 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3145257 /var/tmp/spdk2.sock 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3145257 ']' 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:30.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.642 09:15:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:30.901 00:04:30.901 real 0m1.680s 00:04:30.901 user 0m0.810s 00:04:30.901 sys 0m0.132s 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.901 09:15:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.901 ************************************ 00:04:30.901 END TEST locking_overlapped_coremask_via_rpc 00:04:30.901 ************************************ 00:04:30.901 09:15:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:30.901 09:15:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3145248 ]] 00:04:30.901 09:15:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3145248 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3145248 ']' 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3145248 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145248 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145248' 00:04:30.901 killing process with pid 3145248 00:04:30.901 09:15:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3145248 00:04:30.902 09:15:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3145248 00:04:31.160 09:15:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3145257 ]] 00:04:31.161 09:15:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3145257 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3145257 ']' 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3145257 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145257 00:04:31.161 09:15:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:31.419 09:15:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:31.419 09:15:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145257' 00:04:31.419 killing process with pid 3145257 00:04:31.419 09:15:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3145257 00:04:31.419 09:15:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3145257 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3145248 ]] 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3145248 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3145248 ']' 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3145248 00:04:31.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3145248) - No such process 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3145248 is not found' 00:04:31.678 Process with pid 3145248 is not found 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3145257 ]] 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3145257 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3145257 ']' 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3145257 00:04:31.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3145257) - No such process 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3145257 is not found' 00:04:31.678 Process with pid 3145257 is not found 00:04:31.678 09:15:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:31.678 00:04:31.678 real 0m14.004s 00:04:31.678 user 0m24.387s 00:04:31.678 sys 0m4.813s 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.678 09:15:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.678 ************************************ 00:04:31.678 END TEST cpu_locks 00:04:31.678 ************************************ 00:04:31.678 00:04:31.678 real 0m38.704s 00:04:31.678 user 1m14.188s 00:04:31.678 sys 0m8.236s 00:04:31.678 09:15:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.678 09:15:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.678 ************************************ 00:04:31.678 END TEST event 00:04:31.678 ************************************ 00:04:31.678 09:15:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:31.678 09:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.678 09:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.678 09:15:43 -- common/autotest_common.sh@10 -- # set +x 00:04:31.678 ************************************ 00:04:31.678 START TEST thread 00:04:31.678 ************************************ 00:04:31.678 09:15:43 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:31.678 * Looking for test storage... 00:04:31.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:31.678 09:15:44 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.678 09:15:44 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.678 09:15:44 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.938 09:15:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.938 09:15:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.938 09:15:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.938 09:15:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.938 09:15:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.938 09:15:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.938 09:15:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.938 09:15:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.938 09:15:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.938 09:15:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.938 09:15:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.938 09:15:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:31.938 09:15:44 thread -- scripts/common.sh@345 -- # : 1 00:04:31.938 09:15:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.938 09:15:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.938 09:15:44 thread -- scripts/common.sh@365 -- # decimal 1 00:04:31.938 09:15:44 thread -- scripts/common.sh@353 -- # local d=1 00:04:31.938 09:15:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.938 09:15:44 thread -- scripts/common.sh@355 -- # echo 1 00:04:31.938 09:15:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.938 09:15:44 thread -- scripts/common.sh@366 -- # decimal 2 00:04:31.938 09:15:44 thread -- scripts/common.sh@353 -- # local d=2 00:04:31.938 09:15:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.938 09:15:44 thread -- scripts/common.sh@355 -- # echo 2 00:04:31.938 09:15:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.938 09:15:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.938 09:15:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.938 09:15:44 thread -- scripts/common.sh@368 -- # return 0 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.938 --rc genhtml_branch_coverage=1 00:04:31.938 --rc genhtml_function_coverage=1 00:04:31.938 --rc genhtml_legend=1 00:04:31.938 --rc geninfo_all_blocks=1 00:04:31.938 --rc geninfo_unexecuted_blocks=1 00:04:31.938 00:04:31.938 ' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.938 --rc genhtml_branch_coverage=1 00:04:31.938 --rc genhtml_function_coverage=1 00:04:31.938 --rc genhtml_legend=1 00:04:31.938 --rc geninfo_all_blocks=1 00:04:31.938 --rc geninfo_unexecuted_blocks=1 00:04:31.938 
00:04:31.938 ' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.938 --rc genhtml_branch_coverage=1 00:04:31.938 --rc genhtml_function_coverage=1 00:04:31.938 --rc genhtml_legend=1 00:04:31.938 --rc geninfo_all_blocks=1 00:04:31.938 --rc geninfo_unexecuted_blocks=1 00:04:31.938 00:04:31.938 ' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.938 --rc genhtml_branch_coverage=1 00:04:31.938 --rc genhtml_function_coverage=1 00:04:31.938 --rc genhtml_legend=1 00:04:31.938 --rc geninfo_all_blocks=1 00:04:31.938 --rc geninfo_unexecuted_blocks=1 00:04:31.938 00:04:31.938 ' 00:04:31.938 09:15:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.938 09:15:44 thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.938 ************************************ 00:04:31.938 START TEST thread_poller_perf 00:04:31.938 ************************************ 00:04:31.938 09:15:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:31.938 [2024-12-13 09:15:44.143819] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:31.938 [2024-12-13 09:15:44.143887] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3145810 ] 00:04:31.938 [2024-12-13 09:15:44.210047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.938 [2024-12-13 09:15:44.249216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.938 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:33.326 [2024-12-13T08:15:45.692Z] ====================================== 00:04:33.326 [2024-12-13T08:15:45.692Z] busy:2104528266 (cyc) 00:04:33.326 [2024-12-13T08:15:45.692Z] total_run_count: 417000 00:04:33.326 [2024-12-13T08:15:45.692Z] tsc_hz: 2100000000 (cyc) 00:04:33.326 [2024-12-13T08:15:45.692Z] ====================================== 00:04:33.326 [2024-12-13T08:15:45.692Z] poller_cost: 5046 (cyc), 2402 (nsec) 00:04:33.326 00:04:33.326 real 0m1.169s 00:04:33.326 user 0m1.096s 00:04:33.326 sys 0m0.069s 00:04:33.326 09:15:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.326 09:15:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.326 ************************************ 00:04:33.326 END TEST thread_poller_perf 00:04:33.326 ************************************ 00:04:33.326 09:15:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:33.326 09:15:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:33.326 09:15:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.326 09:15:45 thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.326 ************************************ 00:04:33.326 START TEST thread_poller_perf 00:04:33.326 ************************************ 00:04:33.326 09:15:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:33.326 [2024-12-13 09:15:45.367184] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:33.326 [2024-12-13 09:15:45.367250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146053 ] 00:04:33.326 [2024-12-13 09:15:45.432788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.326 [2024-12-13 09:15:45.472038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.326 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:34.261 [2024-12-13T08:15:46.627Z] ====================================== 00:04:34.261 [2024-12-13T08:15:46.627Z] busy:2101529150 (cyc) 00:04:34.261 [2024-12-13T08:15:46.627Z] total_run_count: 5107000 00:04:34.261 [2024-12-13T08:15:46.627Z] tsc_hz: 2100000000 (cyc) 00:04:34.261 [2024-12-13T08:15:46.627Z] ====================================== 00:04:34.261 [2024-12-13T08:15:46.627Z] poller_cost: 411 (cyc), 195 (nsec) 00:04:34.261 00:04:34.261 real 0m1.163s 00:04:34.261 user 0m1.096s 00:04:34.261 sys 0m0.063s 00:04:34.261 09:15:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.261 09:15:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.261 ************************************ 00:04:34.262 END TEST thread_poller_perf 00:04:34.262 ************************************ 00:04:34.262 09:15:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:34.262 00:04:34.262 real 0m2.608s 00:04:34.262 user 0m2.335s 00:04:34.262 sys 0m0.284s 00:04:34.262 09:15:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.262 09:15:46 thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.262 ************************************ 00:04:34.262 END TEST thread 00:04:34.262 ************************************ 00:04:34.262 09:15:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:34.262 09:15:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:34.262 09:15:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.262 09:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.262 09:15:46 -- common/autotest_common.sh@10 -- # set +x 00:04:34.262 ************************************ 00:04:34.262 START TEST app_cmdline 00:04:34.262 ************************************ 00:04:34.262 09:15:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:34.520 * Looking for test storage... 
00:04:34.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:34.520 09:15:46 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.520 09:15:46 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.520 09:15:46 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.520 09:15:46 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.520 09:15:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.521 09:15:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.521 --rc genhtml_branch_coverage=1 00:04:34.521 --rc genhtml_function_coverage=1 00:04:34.521 --rc genhtml_legend=1 00:04:34.521 --rc geninfo_all_blocks=1 00:04:34.521 --rc geninfo_unexecuted_blocks=1 00:04:34.521 00:04:34.521 ' 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.521 --rc genhtml_branch_coverage=1 00:04:34.521 --rc genhtml_function_coverage=1 00:04:34.521 --rc genhtml_legend=1 00:04:34.521 --rc geninfo_all_blocks=1 00:04:34.521 --rc geninfo_unexecuted_blocks=1 
00:04:34.521 00:04:34.521 ' 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.521 --rc genhtml_branch_coverage=1 00:04:34.521 --rc genhtml_function_coverage=1 00:04:34.521 --rc genhtml_legend=1 00:04:34.521 --rc geninfo_all_blocks=1 00:04:34.521 --rc geninfo_unexecuted_blocks=1 00:04:34.521 00:04:34.521 ' 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.521 --rc genhtml_branch_coverage=1 00:04:34.521 --rc genhtml_function_coverage=1 00:04:34.521 --rc genhtml_legend=1 00:04:34.521 --rc geninfo_all_blocks=1 00:04:34.521 --rc geninfo_unexecuted_blocks=1 00:04:34.521 00:04:34.521 ' 00:04:34.521 09:15:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:34.521 09:15:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3146340 00:04:34.521 09:15:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:34.521 09:15:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3146340 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3146340 ']' 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.521 09:15:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:34.521 [2024-12-13 09:15:46.823669] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:34.521 [2024-12-13 09:15:46.823718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146340 ] 00:04:34.521 [2024-12-13 09:15:46.885383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.779 [2024-12-13 09:15:46.924776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.779 09:15:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.779 09:15:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:34.779 09:15:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:35.037 { 00:04:35.037 "version": "SPDK v25.01-pre git sha1 575641720", 00:04:35.037 "fields": { 00:04:35.037 "major": 25, 00:04:35.037 "minor": 1, 00:04:35.037 "patch": 0, 00:04:35.037 "suffix": "-pre", 00:04:35.037 "commit": "575641720" 00:04:35.037 } 00:04:35.037 } 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:35.037 09:15:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:35.037 09:15:47 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:35.296 request: 00:04:35.296 { 00:04:35.296 "method": "env_dpdk_get_mem_stats", 00:04:35.296 "req_id": 1 00:04:35.296 } 00:04:35.296 Got JSON-RPC error response 00:04:35.296 response: 00:04:35.296 { 00:04:35.296 "code": -32601, 00:04:35.296 "message": "Method not found" 00:04:35.296 } 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.296 09:15:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3146340 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3146340 ']' 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3146340 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146340 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146340' 00:04:35.296 killing process with pid 3146340 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 3146340 00:04:35.296 09:15:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 3146340 00:04:35.863 00:04:35.863 real 0m1.330s 00:04:35.863 user 0m1.562s 00:04:35.863 sys 0m0.445s 00:04:35.863 09:15:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.863 09:15:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:35.863 ************************************ 00:04:35.863 END TEST app_cmdline 00:04:35.863 ************************************ 00:04:35.863 09:15:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:35.863 09:15:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.863 09:15:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.863 09:15:47 -- common/autotest_common.sh@10 -- # set +x 00:04:35.863 ************************************ 00:04:35.863 START TEST version 00:04:35.863 ************************************ 00:04:35.863 09:15:48 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:35.863 * Looking for test storage... 
00:04:35.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:35.863 09:15:48 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.863 09:15:48 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.863 09:15:48 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.863 09:15:48 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.863 09:15:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.863 09:15:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.863 09:15:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.863 09:15:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.864 09:15:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.864 09:15:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.864 09:15:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.864 09:15:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.864 09:15:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.864 09:15:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.864 09:15:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.864 09:15:48 version -- scripts/common.sh@344 -- # case "$op" in 00:04:35.864 09:15:48 version -- scripts/common.sh@345 -- # : 1 00:04:35.864 09:15:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.864 09:15:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.864 09:15:48 version -- scripts/common.sh@365 -- # decimal 1 00:04:35.864 09:15:48 version -- scripts/common.sh@353 -- # local d=1 00:04:35.864 09:15:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.864 09:15:48 version -- scripts/common.sh@355 -- # echo 1 00:04:35.864 09:15:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.864 09:15:48 version -- scripts/common.sh@366 -- # decimal 2 00:04:35.864 09:15:48 version -- scripts/common.sh@353 -- # local d=2 00:04:35.864 09:15:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.864 09:15:48 version -- scripts/common.sh@355 -- # echo 2 00:04:35.864 09:15:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.864 09:15:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.864 09:15:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.864 09:15:48 version -- scripts/common.sh@368 -- # return 0 00:04:35.864 09:15:48 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.864 09:15:48 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.864 --rc genhtml_branch_coverage=1 00:04:35.864 --rc genhtml_function_coverage=1 00:04:35.864 --rc genhtml_legend=1 00:04:35.864 --rc geninfo_all_blocks=1 00:04:35.864 --rc geninfo_unexecuted_blocks=1 00:04:35.864 00:04:35.864 ' 00:04:35.864 09:15:48 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.864 --rc genhtml_branch_coverage=1 00:04:35.864 --rc genhtml_function_coverage=1 00:04:35.864 --rc genhtml_legend=1 00:04:35.864 --rc geninfo_all_blocks=1 00:04:35.864 --rc geninfo_unexecuted_blocks=1 00:04:35.864 00:04:35.864 ' 00:04:35.864 09:15:48 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.864 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.864 --rc genhtml_branch_coverage=1 00:04:35.864 --rc genhtml_function_coverage=1 00:04:35.864 --rc genhtml_legend=1 00:04:35.864 --rc geninfo_all_blocks=1 00:04:35.864 --rc geninfo_unexecuted_blocks=1 00:04:35.864 00:04:35.864 ' 00:04:35.864 09:15:48 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.864 --rc genhtml_branch_coverage=1 00:04:35.864 --rc genhtml_function_coverage=1 00:04:35.864 --rc genhtml_legend=1 00:04:35.864 --rc geninfo_all_blocks=1 00:04:35.864 --rc geninfo_unexecuted_blocks=1 00:04:35.864 00:04:35.864 ' 00:04:35.864 09:15:48 version -- app/version.sh@17 -- # get_header_version major 00:04:35.864 09:15:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # cut -f2 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:35.864 09:15:48 version -- app/version.sh@17 -- # major=25 00:04:35.864 09:15:48 version -- app/version.sh@18 -- # get_header_version minor 00:04:35.864 09:15:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # cut -f2 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:35.864 09:15:48 version -- app/version.sh@18 -- # minor=1 00:04:35.864 09:15:48 version -- app/version.sh@19 -- # get_header_version patch 00:04:35.864 09:15:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # cut -f2 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:35.864 09:15:48 version -- app/version.sh@19 -- # patch=0 00:04:35.864 09:15:48 version -- app/version.sh@20 -- # get_header_version suffix 00:04:35.864 09:15:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # cut -f2 00:04:35.864 09:15:48 version -- app/version.sh@14 -- # tr -d '"' 00:04:35.864 09:15:48 version -- app/version.sh@20 -- # suffix=-pre 00:04:35.864 09:15:48 version -- app/version.sh@22 -- # version=25.1 00:04:35.864 09:15:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:35.864 09:15:48 version -- app/version.sh@28 -- # version=25.1rc0 00:04:35.864 09:15:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:35.864 09:15:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:36.122 09:15:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:36.122 09:15:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:36.122 00:04:36.122 real 0m0.240s 00:04:36.122 user 0m0.156s 00:04:36.122 sys 0m0.123s 00:04:36.122 09:15:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.122 
09:15:48 version -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 ************************************ 00:04:36.122 END TEST version 00:04:36.122 ************************************ 00:04:36.122 09:15:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:36.122 09:15:48 -- spdk/autotest.sh@194 -- # uname -s 00:04:36.122 09:15:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:36.122 09:15:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:36.122 09:15:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:36.122 09:15:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:36.122 09:15:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.122 09:15:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 09:15:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:36.122 09:15:48 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:36.122 09:15:48 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:36.123 09:15:48 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:36.123 09:15:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:36.123 09:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.123 09:15:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.123 ************************************ 00:04:36.123 START TEST nvmf_tcp 00:04:36.123 ************************************ 00:04:36.123 09:15:48 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:36.123 * Looking for test storage... 
00:04:36.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:36.123 09:15:48 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.123 09:15:48 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.123 09:15:48 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.382 09:15:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.382 --rc genhtml_branch_coverage=1 00:04:36.382 --rc genhtml_function_coverage=1 00:04:36.382 --rc genhtml_legend=1 00:04:36.382 --rc geninfo_all_blocks=1 00:04:36.382 --rc geninfo_unexecuted_blocks=1 00:04:36.382 00:04:36.382 ' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.382 --rc genhtml_branch_coverage=1 00:04:36.382 --rc genhtml_function_coverage=1 00:04:36.382 --rc genhtml_legend=1 00:04:36.382 --rc geninfo_all_blocks=1 00:04:36.382 --rc geninfo_unexecuted_blocks=1 00:04:36.382 00:04:36.382 ' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:36.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.382 --rc genhtml_branch_coverage=1 00:04:36.382 --rc genhtml_function_coverage=1 00:04:36.382 --rc genhtml_legend=1 00:04:36.382 --rc geninfo_all_blocks=1 00:04:36.382 --rc geninfo_unexecuted_blocks=1 00:04:36.382 00:04:36.382 ' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.382 --rc genhtml_branch_coverage=1 00:04:36.382 --rc genhtml_function_coverage=1 00:04:36.382 --rc genhtml_legend=1 00:04:36.382 --rc geninfo_all_blocks=1 00:04:36.382 --rc geninfo_unexecuted_blocks=1 00:04:36.382 00:04:36.382 ' 00:04:36.382 09:15:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:36.382 09:15:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:36.382 09:15:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.382 09:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.382 ************************************ 00:04:36.382 START TEST nvmf_target_core 00:04:36.382 ************************************ 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:36.382 * Looking for test storage... 00:04:36.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.382 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.383 --rc genhtml_branch_coverage=1 00:04:36.383 --rc genhtml_function_coverage=1 00:04:36.383 --rc genhtml_legend=1 00:04:36.383 --rc geninfo_all_blocks=1 00:04:36.383 --rc geninfo_unexecuted_blocks=1 00:04:36.383 00:04:36.383 ' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.383 --rc genhtml_branch_coverage=1 00:04:36.383 --rc genhtml_function_coverage=1 00:04:36.383 --rc genhtml_legend=1 00:04:36.383 --rc geninfo_all_blocks=1 00:04:36.383 --rc geninfo_unexecuted_blocks=1 00:04:36.383 00:04:36.383 ' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.383 --rc genhtml_branch_coverage=1 00:04:36.383 --rc genhtml_function_coverage=1 00:04:36.383 --rc genhtml_legend=1 00:04:36.383 --rc geninfo_all_blocks=1 00:04:36.383 --rc geninfo_unexecuted_blocks=1 00:04:36.383 00:04:36.383 ' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.383 --rc genhtml_branch_coverage=1 00:04:36.383 --rc genhtml_function_coverage=1 00:04:36.383 --rc genhtml_legend=1 00:04:36.383 --rc geninfo_all_blocks=1 00:04:36.383 --rc geninfo_unexecuted_blocks=1 00:04:36.383 00:04:36.383 ' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.383 09:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:36.643 
************************************ 00:04:36.643 START TEST nvmf_abort 00:04:36.643 ************************************ 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:36.643 * Looking for test storage... 00:04:36.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.643 --rc genhtml_branch_coverage=1 00:04:36.643 --rc genhtml_function_coverage=1 00:04:36.643 --rc genhtml_legend=1 00:04:36.643 --rc geninfo_all_blocks=1 00:04:36.643 --rc geninfo_unexecuted_blocks=1 00:04:36.643 00:04:36.643 ' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.643 --rc genhtml_branch_coverage=1 00:04:36.643 --rc genhtml_function_coverage=1 00:04:36.643 --rc genhtml_legend=1 00:04:36.643 --rc geninfo_all_blocks=1 00:04:36.643 --rc geninfo_unexecuted_blocks=1 00:04:36.643 00:04:36.643 ' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.643 --rc genhtml_branch_coverage=1 00:04:36.643 --rc genhtml_function_coverage=1 00:04:36.643 --rc genhtml_legend=1 00:04:36.643 --rc geninfo_all_blocks=1 00:04:36.643 --rc geninfo_unexecuted_blocks=1 00:04:36.643 00:04:36.643 ' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.643 --rc genhtml_branch_coverage=1 00:04:36.643 --rc genhtml_function_coverage=1 00:04:36.643 --rc genhtml_legend=1 00:04:36.643 --rc geninfo_all_blocks=1 00:04:36.643 --rc geninfo_unexecuted_blocks=1 00:04:36.643 00:04:36.643 ' 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.643 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
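The block above is the coverage preamble that autotest_common.sh runs before each nested test: lcov --version | awk '{print $NF}' reports 1.15 on this machine, lt 1.15 2 succeeds, and LCOV_OPTS/LCOV are exported with the pre-2.0 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 switches. A minimal sketch of that dotted-version comparison, assuming purely numeric fields (the helper name and body are illustrative, not the actual scripts/common.sh implementation):

  # Succeed (return 0) when dotted version $1 is strictly older than $2.
  version_lt() {
      local IFS=.-:                       # split fields the same way the traced IFS=.-: does
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                            # equal versions are not "less than"
  }

  # Mirrors the gate in the trace: only an lcov older than 2.x gets the extra --rc options.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi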
00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:36.644 09:15:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:43.212 09:15:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:04:43.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:04:43.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:43.212 09:15:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:04:43.212 Found net devices under 0000:af:00.0: cvl_0_0 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:04:43.212 Found net devices under 0000:af:00.1: cvl_0_1 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:43.212 09:15:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:43.212 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:43.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:43.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:04:43.213 00:04:43.213 --- 10.0.0.2 ping statistics --- 00:04:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:43.213 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:43.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:43.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:04:43.213 00:04:43.213 --- 10.0.0.1 ping statistics --- 00:04:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:43.213 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3149955 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3149955 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3149955 ']' 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.213 09:15:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 [2024-12-13 09:15:54.979989] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:04:43.213 [2024-12-13 09:15:54.980032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:43.213 [2024-12-13 09:15:55.044316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.213 [2024-12-13 09:15:55.083762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:43.213 [2024-12-13 09:15:55.083801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:43.213 [2024-12-13 09:15:55.083807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.213 [2024-12-13 09:15:55.083813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.213 [2024-12-13 09:15:55.083817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:43.213 [2024-12-13 09:15:55.085176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.213 [2024-12-13 09:15:55.085258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.213 [2024-12-13 09:15:55.085259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 [2024-12-13 09:15:55.229185] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 Malloc0 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 Delay0 
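With NET_TYPE=phy, nvmf_tcp_init splits the dual-port E810 discovered above: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so initiator and target traffic passes between the two physical ports. Condensed from the commands in the trace (interface and namespace names are the ones this run derived; the nvmf_tgt path is relative to the SPDK repo root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port now lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface, tagged so teardown can find the rule again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                      # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> root namespace
  # The target is launched inside the namespace so it can listen on 10.0.0.2 (pid 3149955 in this run).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &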
00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 [2024-12-13 09:15:55.290030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.213 09:15:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:43.213 [2024-12-13 09:15:55.446592] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:45.117 Initializing NVMe Controllers 00:04:45.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:45.117 controller IO queue size 128 less than required 00:04:45.117 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:45.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:45.117 Initialization complete. Launching workers. 
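The target is then populated and driven with the RPC calls and example binary captured above. The same sequence, issued through scripts/rpc.py against the default /var/tmp/spdk.sock (all flag values are copied from the trace; the size and latency comments reflect the units those RPCs take, MB/bytes and microseconds):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256       # TCP transport with the test's options
  $rpc bdev_malloc_create 64 4096 -b Malloc0                # 64 MB RAM-backed bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000           # roughly 1 s of injected latency so aborts find queued I/O
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Same workload as the test: 1 core, 1 second, queue depth 128 against the listener above.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The abort counters printed next (37888 aborts submitted, 37831 successful) are what this 1-second run against the delayed namespace produced on this host.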
00:04:45.117 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37827 00:04:45.117 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37888, failed to submit 62 00:04:45.117 success 37831, unsuccessful 57, failed 0 00:04:45.117 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:45.117 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.117 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:45.376 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:45.377 rmmod nvme_tcp 00:04:45.377 rmmod nvme_fabrics 00:04:45.377 rmmod nvme_keyring 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3149955 ']' 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3149955 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3149955 ']' 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3149955 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3149955 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3149955' 00:04:45.377 killing process with pid 3149955 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3149955 00:04:45.377 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3149955 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:45.660 09:15:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:45.660 09:15:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:47.567 00:04:47.567 real 0m11.087s 00:04:47.567 user 0m11.482s 00:04:47.567 sys 0m5.378s 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.567 ************************************ 00:04:47.567 END TEST nvmf_abort 00:04:47.567 ************************************ 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.567 09:15:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:47.826 ************************************ 00:04:47.826 START TEST nvmf_ns_hotplug_stress 00:04:47.826 ************************************ 00:04:47.826 09:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:47.826 * Looking for test storage... 
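nvmftestfini then unwinds everything the init path set up, and ns_hotplug_stress restarts the same preamble that opened nvmf_abort. Condensed teardown with the names from this run (the namespace removal that _remove_spdk_ns performs is shown here as a plain ip netns delete):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  kill 3149955 && wait 3149955                           # nvmf_tgt was started by this shell, so wait reaps it
  modprobe -r nvme-tcp nvme-fabrics                      # the rmmod lines above show nvme_keyring leaving with them
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the test's tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1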
00:04:47.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.826 --rc genhtml_branch_coverage=1 00:04:47.826 --rc genhtml_function_coverage=1 00:04:47.826 --rc genhtml_legend=1 00:04:47.826 --rc geninfo_all_blocks=1 00:04:47.826 --rc geninfo_unexecuted_blocks=1 00:04:47.826 00:04:47.826 ' 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.826 --rc genhtml_branch_coverage=1 00:04:47.826 --rc genhtml_function_coverage=1 00:04:47.826 --rc genhtml_legend=1 00:04:47.826 --rc geninfo_all_blocks=1 00:04:47.826 --rc geninfo_unexecuted_blocks=1 00:04:47.826 00:04:47.826 ' 00:04:47.826 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.826 --rc genhtml_branch_coverage=1 00:04:47.826 --rc genhtml_function_coverage=1 00:04:47.826 --rc genhtml_legend=1 00:04:47.826 --rc geninfo_all_blocks=1 00:04:47.826 --rc geninfo_unexecuted_blocks=1 00:04:47.827 00:04:47.827 ' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.827 --rc genhtml_branch_coverage=1 00:04:47.827 --rc genhtml_function_coverage=1 00:04:47.827 --rc genhtml_legend=1 00:04:47.827 --rc geninfo_all_blocks=1 00:04:47.827 --rc geninfo_unexecuted_blocks=1 00:04:47.827 00:04:47.827 ' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:47.827 09:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:04:53.099 Found 0000:af:00.0 (0x8086 - 0x159b) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.099 
09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:04:53.099 Found 0000:af:00.1 (0x8086 - 0x159b) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:04:53.099 Found net devices under 0000:af:00.0: cvl_0_0 00:04:53.099 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:04:53.100 Found net devices under 0000:af:00.1: cvl_0_1 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:53.100 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:53.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:53.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:04:53.359 00:04:53.359 --- 10.0.0.2 ping statistics --- 00:04:53.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.359 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:53.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:53.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:04:53.359 00:04:53.359 --- 10.0.0.1 ping statistics --- 00:04:53.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.359 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:53.359 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:53.618 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:53.618 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:53.618 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3153902 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3153902 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3153902 ']' 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.619 09:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.619 [2024-12-13 09:16:05.799129] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:04:53.619 [2024-12-13 09:16:05.799177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:53.619 [2024-12-13 09:16:05.866036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.619 [2024-12-13 09:16:05.906429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:53.619 [2024-12-13 09:16:05.906467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:53.619 [2024-12-13 09:16:05.906475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.619 [2024-12-13 09:16:05.906480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.619 [2024-12-13 09:16:05.906485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
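What the harness did just above: prepare_net_devs found the two E810 ports (0000:af:00.0 / cvl_0_0 and 0000:af:00.1 / cvl_0_1), and nvmf_tcp_init moved the target port into a private network namespace, assigned 10.0.0.1 and 10.0.0.2 to the two ends, opened TCP port 4420 through iptables, ping-checked the path, and then launched nvmf_tgt inside that namespace. A condensed bash sketch of the same isolation pattern, using the interface names and addresses from this run (relative paths for brevity; this is not the verbatim common.sh):

    NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                      # target NIC port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # reachability check before starting the target
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Keeping the target port in its own namespace separates the two network stacks, so the NVMe/TCP traffic between initiator and target crosses the physical link rather than the host's local loopback path.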
00:04:53.619 [2024-12-13 09:16:05.907674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.619 [2024-12-13 09:16:05.907760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.619 [2024-12-13 09:16:05.907761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:53.877 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:04:53.877 [2024-12-13 09:16:06.220697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.136 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:54.136 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:54.394 [2024-12-13 09:16:06.630209] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:54.394 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:54.653 09:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:54.912 Malloc0 00:04:54.912 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:54.912 Delay0 00:04:54.912 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:55.170 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:55.428 NULL1 00:04:55.428 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:55.687 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3154373 00:04:55.687 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:55.687 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:04:55.687 09:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.062 Read completed with error (sct=0, sc=11) 00:04:57.062 09:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.063 09:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:57.063 09:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:57.321 true 00:04:57.321 09:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:04:57.321 09:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:58.257 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:58.257 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:58.257 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:58.516 true 00:04:58.516 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:04:58.516 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:58.516 09:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
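What follows from here is the single-namespace stress loop: ns_hotplug_stress.sh keeps cycling namespace 1 of cnode1 while the spdk_nvme_perf workload (PID 3154373 in this run) reads from it, which is why the initiator keeps logging the suppressed read errors (sct=0, sc=11) while the namespace is detached. A rough bash sketch of one iteration, pieced together from the @44-@50 trace lines (the rpc path and PERF_PID are placeholders, not the verbatim script):

    rpc=./scripts/rpc.py
    size=1000                                              # matches null_size=1000 at @25
    while kill -0 "$PERF_PID" 2>/dev/null; do              # loop for as long as the perf process is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        size=$((size + 1))
        $rpc bdev_null_resize NULL1 "$size"                # echoes "true" on success, as in the surrounding lines
    done

Once the 30-second perf run (-t 30) exits, the kill -0 probe fails and the loop ends, which is the "No such process" message later in the log.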
00:04:58.775 09:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:58.775 09:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:59.033 true 00:04:59.033 09:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:04:59.033 09:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.410 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.410 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:00.410 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:00.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.410 true 00:05:00.410 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:00.410 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.669 09:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.927 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:00.927 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:01.186 true 00:05:01.186 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:01.186 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:01.445 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.445 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:01.445 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 
00:05:01.703 true 00:05:01.703 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:01.703 09:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:01.962 09:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.220 09:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:02.220 09:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:02.220 true 00:05:02.220 09:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:02.220 09:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.598 09:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.598 09:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:03.598 09:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:03.859 true 00:05:03.859 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:03.859 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.192 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.192 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:04.192 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:04.548 true 00:05:04.548 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:04.548 09:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.486 09:16:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.745 09:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:05.745 09:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:06.003 true 00:05:06.003 09:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:06.003 09:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.939 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.939 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:06.939 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:07.197 true 00:05:07.197 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:07.197 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.456 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.714 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:07.714 09:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:07.714 true 00:05:07.714 09:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:07.714 09:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.090 09:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.091 09:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:09.091 09:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:09.349 true 00:05:09.349 09:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:09.349 09:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.285 09:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.544 09:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:10.544 09:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:10.544 true 00:05:10.544 09:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:10.544 09:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.802 09:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.060 09:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:11.060 09:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:11.319 true 00:05:11.319 09:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:11.319 09:16:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.256 09:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.256 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.514 09:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:12.515 09:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:12.774 true 00:05:12.774 09:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:12.774 09:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.709 09:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.709 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:13.709 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:13.968 true 00:05:13.968 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:13.968 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.226 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.485 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:14.485 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:14.485 true 00:05:14.485 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:14.485 09:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.861 09:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.861 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.120 09:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:16.120 09:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:16.120 true 00:05:16.120 09:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:16.120 09:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.056 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.314 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:17.314 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:17.314 true 00:05:17.314 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:17.314 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.573 09:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.831 09:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:17.831 09:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:17.831 true 00:05:18.089 09:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:18.089 09:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.025 09:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.283 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.283 09:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:19.283 09:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:19.542 true 00:05:19.542 09:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:19.542 09:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.477 09:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.477 09:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:20.477 09:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:20.736 true 00:05:20.736 09:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:20.736 09:16:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.995 09:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.995 09:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:20.995 09:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:21.254 true 00:05:21.254 09:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:21.254 09:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.190 09:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.449 09:16:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:22.449 09:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:22.708 true 00:05:22.708 09:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:22.708 09:16:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.643 09:16:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.643 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:23.643 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:23.902 true 00:05:23.902 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:23.902 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.161 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.419 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:24.419 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:24.419 true 00:05:24.678 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:24.678 09:16:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.613 09:16:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.872 Initializing NVMe Controllers 00:05:25.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:25.872 Controller IO queue size 128, less than required. 
00:05:25.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:25.872 Controller IO queue size 128, less than required. 00:05:25.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:25.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:25.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:25.872 Initialization complete. Launching workers. 00:05:25.872 ======================================================== 00:05:25.872 Latency(us) 00:05:25.872 Device Information : IOPS MiB/s Average min max 00:05:25.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2200.20 1.07 38326.33 2064.89 1058280.98 00:05:25.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17417.63 8.50 7348.82 2581.27 446514.98 00:05:25.872 ======================================================== 00:05:25.872 Total : 19617.83 9.58 10823.05 2064.89 1058280.98 00:05:25.872 00:05:25.872 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:25.872 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:26.130 true 00:05:26.130 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3154373 00:05:26.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3154373) - No such process 00:05:26.130 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3154373 00:05:26.130 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:26.389 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:26.648 null0 00:05:26.648 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:26.648 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:26.648 09:16:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:26.906 null1 00:05:26.906 09:16:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:26.906 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:26.906 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:27.165 null2 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:27.165 null3 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:27.165 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:27.423 null4 00:05:27.423 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:27.423 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:27.423 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:27.682 null5 00:05:27.682 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:27.682 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:27.682 09:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:27.940 null6 00:05:27.940 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:27.940 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:27.940 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:27.940 null7 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
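The first part of this section, up to the "No such process" message and the IOPS/latency summary, is the tail of the single-namespace phase of ns_hotplug_stress.sh: namespace 1 is repeatedly detached and re-attached while the NULL1 null bdev grows one unit per pass, and a background I/O generator (pid 3154373 in this run) keeps reading from the target the whole time. The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices are expected while the namespace is detached; sc=11 is consistent with the generic "Invalid Namespace or Format" NVMe status. Reconstructed from the sh@44-@55 markers, that loop is roughly the following sketch (rpc_py, perf_pid and the starting null_size are assumed names, not the verbatim script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$perf_pid"; do                                         # sh@44: loop while the I/O generator is still alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: detach namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                      # sh@49: 1019, 1020, ... 1028 in this log
        $rpc_py bdev_null_resize NULL1 "$null_size"                       # sh@50: prints "true" on success
    done
    wait "$perf_pid"                                                      # sh@53: reap the generator once it exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # sh@54
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2         # sh@55

Once the generator exits, kill -0 reports "No such process", the generator prints its per-namespace IOPS/latency summary, both namespaces are removed, and the script sets up the multi-worker phase whose bdev_null_create calls appear just above.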
00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
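From sh@58 onward the trace is the parallel hotplug phase: eight null bdevs (null0 .. null7, created with size 100 and a 4096-byte block size, as shown at sh@60) and eight background add_remove workers, one namespace ID per worker, whose pids are collected for the wait at sh@66. Inferred from those markers, the fan-out is approximately the sketch below (rpc_py is assumed to point at spdk/scripts/rpc.py as in the traced commands):

    nthreads=8                                          # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                # sh@59
        $rpc_py bdev_null_create "null$i" 100 4096      # sh@60: null0 .. null7
    done
    for ((i = 0; i < nthreads; i++)); do                # sh@62
        add_remove $((i + 1)) "null$i" &                # sh@63: e.g. "add_remove 1 null0" above
        pids+=($!)                                      # sh@64: remember each worker's pid
    done
    wait "${pids[@]}"                                   # sh@66: 3159828 3159829 ... 3159841 in this run

The interleaved @62-@64 and @14-@17 entries that follow are simply those eight workers starting up concurrently.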
00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.199 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3159828 3159829 3159831 3159833 3159837 3159838 3159840 3159841 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:28.200 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.458 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.459 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:28.718 09:16:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
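Every worker traced through the remainder of this section runs the same short loop; from the sh@14-@18 markers it looks approximately like this (a sketch, not the verbatim function):

    add_remove() {                                       # called as: add_remove <nsid> <bdev>
        local nsid=$1 bdev=$2                            # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                   # sh@16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

With all eight workers attaching and detaching different NSIDs on nqn.2016-06.io.spdk:cnode1 at the same time, the add/remove ordering across namespaces is deliberately unsynchronized, which is the hotplug stress this test exercises.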
00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:28.977 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.236 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:29.495 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:29.754 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:29.754 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:29.754 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:30.013 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.272 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:30.273 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:30.531 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.532 09:16:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.791 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.050 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.309 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.567 09:16:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.825 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.084 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:32.342 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:32.343 rmmod nvme_tcp 00:05:32.343 rmmod nvme_fabrics 00:05:32.343 rmmod nvme_keyring 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3153902 ']' 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3153902 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3153902 ']' 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3153902 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153902 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153902' 00:05:32.343 killing process with pid 
3153902 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3153902 00:05:32.343 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3153902 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.602 09:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.507 09:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:34.507 00:05:34.507 real 0m46.924s 00:05:34.507 user 3m12.829s 00:05:34.507 sys 0m15.172s 00:05:34.507 09:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.507 09:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.507 ************************************ 00:05:34.507 END TEST nvmf_ns_hotplug_stress 00:05:34.507 ************************************ 00:05:34.767 09:16:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:34.767 09:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.767 09:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.767 09:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:34.767 ************************************ 00:05:34.767 START TEST nvmf_delete_subsystem 00:05:34.767 ************************************ 00:05:34.767 09:16:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:34.767 * Looking for test storage... 
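The block above is the ns_hotplug_stress loop (ns_hotplug_stress.sh lines 16-18 in the trace): it repeatedly attaches namespaces 1-8, each backed by one of the null0-null7 bdevs, to nqn.2016-06.io.spdk:cnode1 and then hot-removes them again, ten iterations in total, before tearing the target down. A minimal sketch of that pattern, using the rpc.py path, NQN and bdev names from the trace; the backgrounded calls and the exact loop shape are inferred from the interleaved ordering in the log, not taken from the script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; i++ )); do
    # attach eight namespaces, each backed by a null bdev (nsid n -> null$((n-1)))
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # then hot-remove them all again before the next iteration
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done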
00:05:34.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.767 --rc genhtml_branch_coverage=1 00:05:34.767 --rc genhtml_function_coverage=1 00:05:34.767 --rc genhtml_legend=1 00:05:34.767 --rc geninfo_all_blocks=1 00:05:34.767 --rc geninfo_unexecuted_blocks=1 00:05:34.767 00:05:34.767 ' 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.767 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.768 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:35.028 09:16:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:40.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.300 
09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:40.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:40.300 Found net devices under 0000:af:00.0: cvl_0_0 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:40.300 Found net devices under 0000:af:00.1: cvl_0_1 
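The preceding lines are nvmf/common.sh resolving the two detected E810 ports (0000:af:00.0 and 0000:af:00.1) to their kernel interfaces, cvl_0_0 and cvl_0_1. A minimal sketch of that lookup, mirroring the sysfs pattern visible in the trace:

# Each NIC's bound interface is exposed under /sys/bus/pci/devices/<bdf>/net/;
# the harness strips the sysfs path and collects the names into net_devs.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done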
00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:40.300 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:40.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:40.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:05:40.301 00:05:40.301 --- 10.0.0.2 ping statistics --- 00:05:40.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.301 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:40.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:40.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:05:40.301 00:05:40.301 --- 10.0.0.1 ping statistics --- 00:05:40.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.301 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3164139 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3164139 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3164139 ']' 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.301 09:16:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.301 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.301 [2024-12-13 09:16:52.633238] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:40.301 [2024-12-13 09:16:52.633288] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:40.560 [2024-12-13 09:16:52.696452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.560 [2024-12-13 09:16:52.738654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:40.560 [2024-12-13 09:16:52.738690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:40.560 [2024-12-13 09:16:52.738698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.560 [2024-12-13 09:16:52.738704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.560 [2024-12-13 09:16:52.738709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:40.560 [2024-12-13 09:16:52.739837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.560 [2024-12-13 09:16:52.739840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 [2024-12-13 09:16:52.881139] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:40.560 09:16:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 [2024-12-13 09:16:52.901354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 NULL1 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.560 Delay0 00:05:40.560 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.561 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.561 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.561 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.820 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.820 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3164165 00:05:40.820 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:40.820 09:16:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:40.820 [2024-12-13 09:16:52.993082] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
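Between the interface discovery above and the I/O errors below, the trace covers the whole delete_subsystem setup: the target is started inside the cvl_0_0_ns_spdk namespace on 10.0.0.2, a TCP transport and subsystem are created, a delay bdev wrapped around a null bdev is attached as a namespace, spdk_nvme_perf is launched against it from the host side, and the subsystem is deleted while that perf run is still in flight. A condensed sketch of the traced sequence (paths, addresses and parameters are copied from the log; the surrounding shell is simplified, and the RPC socket /var/tmp/spdk.sock is assumed to be reachable from outside the namespace as in this run):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py   # talks to the target over /var/tmp/spdk.sock

# target runs inside the network namespace, listening on 10.0.0.2
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# a null bdev wrapped in a delay bdev, exposed as the subsystem's namespace;
# the large delay values keep requests in flight when the subsystem goes away
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# drive I/O from the host side, then delete the subsystem underneath it
$spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1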
00:05:42.724 09:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:42.724 09:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.724 09:16:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 [2024-12-13 09:16:55.204510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0298000c80 is same with the state(6) to be set 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 
Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed 
with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read 
completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Write completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 starting I/O failed: -6 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.983 Read completed with error (sct=0, sc=8) 00:05:42.984 [2024-12-13 09:16:55.205128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6960 is same with the state(6) to be set 00:05:42.984 Write completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Write completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:42.984 Write completed with error (sct=0, sc=8) 00:05:42.984 Read completed with error (sct=0, sc=8) 00:05:43.948 [2024-12-13 09:16:56.171903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f79b0 is same with the state(6) to be set 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Write completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.948 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 [2024-12-13 09:16:56.207004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6780 is same with the state(6) to 
be set 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 [2024-12-13 09:16:56.207160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f62c0 is same with the state(6) to be set 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write 
completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 [2024-12-13 09:16:56.207310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6b40 is same with the state(6) to be set 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Write completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 Read completed with error (sct=0, sc=8) 00:05:43.949 [2024-12-13 09:16:56.207910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f029800d390 is same with the state(6) to be set 00:05:43.949 Initializing NVMe Controllers 00:05:43.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:43.949 Controller IO queue size 128, less than required. 00:05:43.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:43.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:43.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:43.949 Initialization complete. Launching workers. 
00:05:43.949 ======================================================== 00:05:43.949 Latency(us) 00:05:43.949 Device Information : IOPS MiB/s Average min max 00:05:43.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.04 0.09 947637.04 1441.21 1012197.00 00:05:43.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.79 0.08 867364.25 370.27 1010067.89 00:05:43.949 ======================================================== 00:05:43.949 Total : 348.83 0.17 911325.88 370.27 1012197.00 00:05:43.949 00:05:43.949 [2024-12-13 09:16:56.208687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f79b0 (9): Bad file descriptor 00:05:43.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:43.949 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.949 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:43.949 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3164165 00:05:43.949 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3164165 00:05:44.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3164165) - No such process 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3164165 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3164165 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3164165 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.516 09:16:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.516 [2024-12-13 09:16:56.740233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3164841 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:44.516 09:16:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:44.516 [2024-12-13 09:16:56.815100] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:45.084 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:45.084 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:45.084 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:45.651 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:45.651 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:45.651 09:16:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:45.909 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:45.909 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:45.909 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:46.476 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:46.476 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:46.476 09:16:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.042 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:47.043 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:47.043 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.610 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:47.610 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:47.610 09:16:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.889 Initializing NVMe Controllers 00:05:47.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:47.889 Controller IO queue size 128, less than required. 00:05:47.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:47.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:47.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:47.889 Initialization complete. Launching workers. 
00:05:47.889 ======================================================== 00:05:47.889 Latency(us) 00:05:47.890 Device Information : IOPS MiB/s Average min max 00:05:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003568.08 1000152.44 1045067.92 00:05:47.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004747.80 1000151.65 1042426.88 00:05:47.890 ======================================================== 00:05:47.890 Total : 256.00 0.12 1004157.94 1000151.65 1045067.92 00:05:47.890 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3164841 00:05:48.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3164841) - No such process 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3164841 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.227 rmmod nvme_tcp 00:05:48.227 rmmod nvme_fabrics 00:05:48.227 rmmod nvme_keyring 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3164139 ']' 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3164139 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3164139 ']' 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3164139 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164139 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164139' 00:05:48.227 killing process with pid 3164139 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3164139 00:05:48.227 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3164139 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.514 09:17:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:50.426 00:05:50.426 real 0m15.711s 00:05:50.426 user 0m29.186s 00:05:50.426 sys 0m5.092s 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.426 ************************************ 00:05:50.426 END TEST nvmf_delete_subsystem 00:05:50.426 ************************************ 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:50.426 ************************************ 00:05:50.426 START TEST nvmf_host_management 00:05:50.426 ************************************ 00:05:50.426 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:50.685 * Looking for test storage... 
00:05:50.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.685 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.685 --rc genhtml_branch_coverage=1 00:05:50.685 --rc genhtml_function_coverage=1 00:05:50.685 --rc genhtml_legend=1 00:05:50.685 --rc geninfo_all_blocks=1 00:05:50.686 --rc geninfo_unexecuted_blocks=1 00:05:50.686 00:05:50.686 ' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.686 --rc genhtml_branch_coverage=1 00:05:50.686 --rc genhtml_function_coverage=1 00:05:50.686 --rc genhtml_legend=1 00:05:50.686 --rc geninfo_all_blocks=1 00:05:50.686 --rc geninfo_unexecuted_blocks=1 00:05:50.686 00:05:50.686 ' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.686 --rc genhtml_branch_coverage=1 00:05:50.686 --rc genhtml_function_coverage=1 00:05:50.686 --rc genhtml_legend=1 00:05:50.686 --rc geninfo_all_blocks=1 00:05:50.686 --rc geninfo_unexecuted_blocks=1 00:05:50.686 00:05:50.686 ' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.686 --rc genhtml_branch_coverage=1 00:05:50.686 --rc genhtml_function_coverage=1 00:05:50.686 --rc genhtml_legend=1 00:05:50.686 --rc geninfo_all_blocks=1 00:05:50.686 --rc geninfo_unexecuted_blocks=1 00:05:50.686 00:05:50.686 ' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:05:50.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:50.686 09:17:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:05:55.960 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:55.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:55.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:55.961 Found net devices under 0000:af:00.0: cvl_0_0 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.961 09:17:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:55.961 Found net devices under 0000:af:00.1: cvl_0_1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:55.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:55.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:05:55.961 00:05:55.961 --- 10.0.0.2 ping statistics --- 00:05:55.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.961 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:05:55.961 09:17:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:55.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:55.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:05:55.961 00:05:55.961 --- 10.0.0.1 ping statistics --- 00:05:55.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.961 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3168785 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3168785 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:05:55.961 09:17:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3168785 ']' 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:55.961 [2024-12-13 09:17:08.100523] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:05:55.961 [2024-12-13 09:17:08.100566] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:55.961 [2024-12-13 09:17:08.167727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.961 [2024-12-13 09:17:08.209140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:55.961 [2024-12-13 09:17:08.209176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:55.961 [2024-12-13 09:17:08.209183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.961 [2024-12-13 09:17:08.209188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.961 [2024-12-13 09:17:08.209193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
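The nvmf_tcp_init and nvmfappstart steps traced above amount to the following shell sequence. This is a condensed sketch reconstructed from this run's trace, not the verbatim nvmf/common.sh; the cvl_0_* interface names, the 10.0.0.x addresses, and the namespace name are specific to this host.

  # Pair the two ice ports: cvl_0_0 becomes the target side inside a private
  # network namespace, cvl_0_1 stays in the root namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Launch the target application inside the namespace (cores 1-4 via mask 0x1E).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &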
00:05:55.961 [2024-12-13 09:17:08.210545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.961 [2024-12-13 09:17:08.210632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.961 [2024-12-13 09:17:08.210761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.961 [2024-12-13 09:17:08.210761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.961 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 [2024-12-13 09:17:08.348152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 Malloc0 00:05:56.221 [2024-12-13 09:17:08.417095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3169024 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3169024 /var/tmp/bdevperf.sock 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3169024 ']' 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:56.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:56.221 { 00:05:56.221 "params": { 00:05:56.221 "name": "Nvme$subsystem", 00:05:56.221 "trtype": "$TEST_TRANSPORT", 00:05:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:56.221 "adrfam": "ipv4", 00:05:56.221 "trsvcid": "$NVMF_PORT", 00:05:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:56.221 "hdgst": ${hdgst:-false}, 00:05:56.221 "ddgst": ${ddgst:-false} 00:05:56.221 }, 00:05:56.221 "method": "bdev_nvme_attach_controller" 00:05:56.221 } 00:05:56.221 EOF 00:05:56.221 )") 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:56.221 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:56.221 "params": { 00:05:56.221 "name": "Nvme0", 00:05:56.221 "trtype": "tcp", 00:05:56.221 "traddr": "10.0.0.2", 00:05:56.221 "adrfam": "ipv4", 00:05:56.221 "trsvcid": "4420", 00:05:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:56.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:56.221 "hdgst": false, 00:05:56.221 "ddgst": false 00:05:56.221 }, 00:05:56.221 "method": "bdev_nvme_attach_controller" 00:05:56.221 }' 00:05:56.221 [2024-12-13 09:17:08.510137] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:56.221 [2024-12-13 09:17:08.510182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169024 ] 00:05:56.221 [2024-12-13 09:17:08.572642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.480 [2024-12-13 09:17:08.614022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.740 Running I/O for 10 seconds... 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:05:56.740 09:17:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:05:57.001 
09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.001 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:57.001 [2024-12-13 09:17:09.292337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:57.001 [2024-12-13 09:17:09.292460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:57.001 [2024-12-13 09:17:09.292614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:57.001 [2024-12-13 09:17:09.292760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.001 [2024-12-13 09:17:09.292830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.001 [2024-12-13 09:17:09.292836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:57.002 [2024-12-13 09:17:09.292900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.292990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.292997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:05:57.002 [2024-12-13 09:17:09.293049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 
09:17:09.293193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:05:57.002 [2024-12-13 09:17:09.293318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.002 [2024-12-13 09:17:09.293325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39770 is same with the state(6) to be set 00:05:57.002 [2024-12-13 09:17:09.294261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:05:57.002 task offset: 100608 on job bdev=Nvme0n1 fails 00:05:57.002 00:05:57.002 
Latency(us) 00:05:57.002 [2024-12-13T08:17:09.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:57.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:57.002 Job: Nvme0n1 ended in about 0.40 seconds with error 00:05:57.002 Verification LBA range: start 0x0 length 0x400 00:05:57.002 Nvme0n1 : 0.40 1921.72 120.11 160.14 0.00 29920.12 1466.76 26838.55 00:05:57.002 [2024-12-13T08:17:09.368Z] =================================================================================================================== 00:05:57.002 [2024-12-13T08:17:09.368Z] Total : 1921.72 120.11 160.14 0.00 29920.12 1466.76 26838.55 00:05:57.002 [2024-12-13 09:17:09.296606] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.002 [2024-12-13 09:17:09.296626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb207e0 (9): Bad file descriptor 00:05:57.002 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.002 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:05:57.002 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.002 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:57.003 [2024-12-13 09:17:09.299746] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:05:57.003 [2024-12-13 09:17:09.299823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:05:57.003 [2024-12-13 09:17:09.299845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:05:57.003 [2024-12-13 09:17:09.299859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:05:57.003 [2024-12-13 09:17:09.299866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:05:57.003 [2024-12-13 09:17:09.299873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:05:57.003 [2024-12-13 09:17:09.299880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb207e0 00:05:57.003 [2024-12-13 09:17:09.299897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb207e0 (9): Bad file descriptor 00:05:57.003 [2024-12-13 09:17:09.299909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:05:57.003 [2024-12-13 09:17:09.299916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:05:57.003 [2024-12-13 09:17:09.299924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:05:57.003 [2024-12-13 09:17:09.299937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
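The host-management flow that produced the aborted WRITE/READ completions above is: run bdevperf against the subsystem, wait until reads accumulate, revoke the host's access, and confirm the controller cannot reconnect until access is restored. A rough sketch of that sequence, using only the rpc_cmd calls and the bdevperf invocation visible in this trace (the polling loop is paraphrased from the i/read_io_count bookkeeping above):

  # Target side: TCP transport with 8192-byte in-capsule data.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

  # Initiator side: drive verify I/O for 10 seconds; the JSON on fd 63 is what
  # gen_nvmf_target_json 0 prints (shown verbatim above).
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10 &
  rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init

  # Poll until at least 100 reads have completed on Nvme0n1.
  while (( $(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops') < 100 )); do
      sleep 0.25
  done

  # Revoke and then restore the host's access; in-flight I/O is aborted and the
  # controller reset fails ("does not allow host") until add_host runs.
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0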
00:05:57.003 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.003 09:17:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3169024 00:05:58.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3169024) - No such process 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:58.380 { 00:05:58.380 "params": { 00:05:58.380 "name": "Nvme$subsystem", 00:05:58.380 "trtype": "$TEST_TRANSPORT", 00:05:58.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:58.380 "adrfam": "ipv4", 00:05:58.380 "trsvcid": "$NVMF_PORT", 00:05:58.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:58.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:58.380 "hdgst": ${hdgst:-false}, 00:05:58.380 "ddgst": ${ddgst:-false} 00:05:58.380 }, 00:05:58.380 "method": "bdev_nvme_attach_controller" 00:05:58.380 } 00:05:58.380 EOF 00:05:58.380 )") 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:58.380 09:17:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:58.380 "params": { 00:05:58.380 "name": "Nvme0", 00:05:58.380 "trtype": "tcp", 00:05:58.380 "traddr": "10.0.0.2", 00:05:58.380 "adrfam": "ipv4", 00:05:58.380 "trsvcid": "4420", 00:05:58.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:58.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:58.380 "hdgst": false, 00:05:58.380 "ddgst": false 00:05:58.380 }, 00:05:58.380 "method": "bdev_nvme_attach_controller" 00:05:58.380 }' 00:05:58.380 [2024-12-13 09:17:10.360911] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:05:58.380 [2024-12-13 09:17:10.360959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169288 ] 00:05:58.380 [2024-12-13 09:17:10.423612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.380 [2024-12-13 09:17:10.463846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.639 Running I/O for 1 seconds... 00:05:59.576 1984.00 IOPS, 124.00 MiB/s 00:05:59.576 Latency(us) 00:05:59.576 [2024-12-13T08:17:11.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.576 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:59.576 Verification LBA range: start 0x0 length 0x400 00:05:59.576 Nvme0n1 : 1.01 2042.48 127.66 0.00 0.00 30739.78 2012.89 27088.21 00:05:59.576 [2024-12-13T08:17:11.942Z] =================================================================================================================== 00:05:59.576 [2024-12-13T08:17:11.942Z] Total : 2042.48 127.66 0.00 0.00 30739.78 2012.89 27088.21 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:59.835 09:17:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:59.835 rmmod nvme_tcp 00:05:59.835 rmmod nvme_fabrics 00:05:59.835 rmmod nvme_keyring 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3168785 ']' 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3168785 00:05:59.835 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3168785 ']' 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3168785 00:05:59.836 09:17:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168785 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168785' 00:05:59.836 killing process with pid 3168785 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3168785 00:05:59.836 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3168785 00:06:00.095 [2024-12-13 09:17:12.260162] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.095 09:17:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.999 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:02.000 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:02.000 00:06:02.000 real 0m11.642s 00:06:02.000 user 0m20.018s 00:06:02.000 sys 0m4.869s 00:06:02.000 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.000 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.000 ************************************ 00:06:02.000 END TEST nvmf_host_management 00:06:02.000 ************************************ 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
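nvmftestfini then tears the setup down; condensed from the trace above (the body of _remove_spdk_ns is not echoed in this log, so the namespace-deletion step is an assumption):

  # Unload the kernel initiator modules pulled in by modprobe nvme-tcp earlier.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the target app (pid recorded by nvmfappstart, 3168785 in this run) and
  # drop only the iptables rules tagged with the SPDK_NVMF comment.
  killprocess "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Assumed: _remove_spdk_ns deletes cvl_0_0_ns_spdk and returns the port to the
  # root namespace; afterwards the initiator-side addresses are flushed.
  _remove_spdk_ns
  ip -4 addr flush cvl_0_1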
00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:02.259 ************************************ 00:06:02.259 START TEST nvmf_lvol 00:06:02.259 ************************************ 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:02.259 * Looking for test storage... 00:06:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.259 --rc genhtml_branch_coverage=1 00:06:02.259 --rc genhtml_function_coverage=1 00:06:02.259 --rc genhtml_legend=1 00:06:02.259 --rc geninfo_all_blocks=1 00:06:02.259 --rc geninfo_unexecuted_blocks=1 00:06:02.259 00:06:02.259 ' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.259 --rc genhtml_branch_coverage=1 00:06:02.259 --rc genhtml_function_coverage=1 00:06:02.259 --rc genhtml_legend=1 00:06:02.259 --rc geninfo_all_blocks=1 00:06:02.259 --rc geninfo_unexecuted_blocks=1 00:06:02.259 00:06:02.259 ' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.259 --rc genhtml_branch_coverage=1 00:06:02.259 --rc genhtml_function_coverage=1 00:06:02.259 --rc genhtml_legend=1 00:06:02.259 --rc geninfo_all_blocks=1 00:06:02.259 --rc geninfo_unexecuted_blocks=1 00:06:02.259 00:06:02.259 ' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.259 --rc genhtml_branch_coverage=1 00:06:02.259 --rc genhtml_function_coverage=1 00:06:02.259 --rc genhtml_legend=1 00:06:02.259 --rc geninfo_all_blocks=1 00:06:02.259 --rc geninfo_unexecuted_blocks=1 00:06:02.259 00:06:02.259 ' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
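For readability, here is a condensed sketch of the component-wise version check that the lt/cmp_versions trace above walks through; it is a simplification, not the full scripts/common.sh implementation, and it assumes plain dotted version strings:

    # lt A B -> success (0) when version A is strictly older than version B
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov predates 2.x, keep the branch/function coverage flags'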
00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.259 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.260 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:02.519 09:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:07.790 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.790 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:07.791 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.791 09:17:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:07.791 Found net devices under 0000:af:00.0: cvl_0_0 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:07.791 Found net devices under 0000:af:00.1: cvl_0_1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.791 09:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:07.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:07.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:06:07.791 00:06:07.791 --- 10.0.0.2 ping statistics --- 00:06:07.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.791 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:07.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:06:07.791 00:06:07.791 --- 10.0.0.1 ping statistics --- 00:06:07.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.791 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3172998 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3172998 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3172998 ']' 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.791 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:08.054 [2024-12-13 09:17:20.195099] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
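In short, the nvmftestinit/nvmf_tcp_init sequence traced above pins the target-side e810 port into its own network namespace so that target and initiator talk over the two physical ports of a single host. Roughly, using the device names and addresses from this run (they will differ on other machines, and paths are abbreviated):

    # target port cvl_0_0 -> namespace cvl_0_0_ns_spdk as 10.0.0.2
    # initiator port cvl_0_1 -> root namespace as 10.0.0.1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # the nvmf target is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7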
00:06:08.054 [2024-12-13 09:17:20.195145] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.054 [2024-12-13 09:17:20.262749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.054 [2024-12-13 09:17:20.304729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.055 [2024-12-13 09:17:20.304772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.055 [2024-12-13 09:17:20.304779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.055 [2024-12-13 09:17:20.304785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.055 [2024-12-13 09:17:20.304790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.055 [2024-12-13 09:17:20.306062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.055 [2024-12-13 09:17:20.306158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.055 [2024-12-13 09:17:20.306161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.055 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.055 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:08.055 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:08.055 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.055 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:08.315 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.315 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:08.315 [2024-12-13 09:17:20.611856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.315 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:08.572 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:08.572 09:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:08.829 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:08.830 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:09.087 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:09.087 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ee2db2c4-f88d-4b72-b34f-f7ece62645b6 00:06:09.087 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee2db2c4-f88d-4b72-b34f-f7ece62645b6 lvol 20 00:06:09.344 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6695d4e3-3ab8-46a4-9f04-d2d188803664 00:06:09.344 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:09.601 09:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6695d4e3-3ab8-46a4-9f04-d2d188803664 00:06:09.858 09:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:09.858 [2024-12-13 09:17:22.190429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.858 09:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.115 09:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3173479 00:06:10.115 09:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:10.115 09:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:11.487 09:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6695d4e3-3ab8-46a4-9f04-d2d188803664 MY_SNAPSHOT 00:06:11.487 09:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=622be89c-6fb5-4698-bae1-539e123f2411 00:06:11.487 09:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6695d4e3-3ab8-46a4-9f04-d2d188803664 30 00:06:11.745 09:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 622be89c-6fb5-4698-bae1-539e123f2411 MY_CLONE 00:06:12.003 09:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=07c85eb2-740b-4d1b-a0ec-9772deb30a70 00:06:12.003 09:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 07c85eb2-740b-4d1b-a0ec-9772deb30a70 00:06:12.570 09:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3173479 00:06:20.676 Initializing NVMe Controllers 00:06:20.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:20.676 Controller IO queue size 128, less than required. 00:06:20.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
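Stripped of the xtrace noise, the RPC sequence the nvmf_lvol test drives above boils down to the following; rpc.py stands for the full scripts/rpc.py path used in the log, and the <...> placeholders are the UUIDs returned at runtime (ee2db2c4-... for the lvstore and 6695d4e3-... for the lvol in this run):

    rpc.py bdev_malloc_create 64 512                        # Malloc0
    rpc.py bdev_malloc_create 64 512                        # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20           # LVOL_BDEV_INIT_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the exported namespace:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol-uuid> 30                  # LVOL_BDEV_FINAL_SIZE
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>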
00:06:20.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:20.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:20.676 Initialization complete. Launching workers. 00:06:20.676 ======================================================== 00:06:20.676 Latency(us) 00:06:20.676 Device Information : IOPS MiB/s Average min max 00:06:20.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12090.10 47.23 10592.81 1317.08 59968.59 00:06:20.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11979.00 46.79 10685.40 3528.16 56560.88 00:06:20.676 ======================================================== 00:06:20.676 Total : 24069.10 94.02 10638.89 1317.08 59968.59 00:06:20.676 00:06:20.676 09:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:20.934 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6695d4e3-3ab8-46a4-9f04-d2d188803664 00:06:20.934 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee2db2c4-f88d-4b72-b34f-f7ece62645b6 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:21.192 rmmod nvme_tcp 00:06:21.192 rmmod nvme_fabrics 00:06:21.192 rmmod nvme_keyring 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3172998 ']' 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3172998 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3172998 ']' 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3172998 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.192 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172998 00:06:21.451 09:17:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172998' 00:06:21.451 killing process with pid 3172998 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3172998 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3172998 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.451 09:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.987 00:06:23.987 real 0m21.426s 00:06:23.987 user 1m2.824s 00:06:23.987 sys 0m7.234s 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.987 ************************************ 00:06:23.987 END TEST nvmf_lvol 00:06:23.987 ************************************ 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.987 09:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.987 ************************************ 00:06:23.988 START TEST nvmf_lvs_grow 00:06:23.988 ************************************ 00:06:23.988 09:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:23.988 * Looking for test storage... 
00:06:23.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.988 --rc genhtml_branch_coverage=1 00:06:23.988 --rc genhtml_function_coverage=1 00:06:23.988 --rc genhtml_legend=1 00:06:23.988 --rc geninfo_all_blocks=1 00:06:23.988 --rc geninfo_unexecuted_blocks=1 00:06:23.988 00:06:23.988 ' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.988 --rc genhtml_branch_coverage=1 00:06:23.988 --rc genhtml_function_coverage=1 00:06:23.988 --rc genhtml_legend=1 00:06:23.988 --rc geninfo_all_blocks=1 00:06:23.988 --rc geninfo_unexecuted_blocks=1 00:06:23.988 00:06:23.988 ' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.988 --rc genhtml_branch_coverage=1 00:06:23.988 --rc genhtml_function_coverage=1 00:06:23.988 --rc genhtml_legend=1 00:06:23.988 --rc geninfo_all_blocks=1 00:06:23.988 --rc geninfo_unexecuted_blocks=1 00:06:23.988 00:06:23.988 ' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.988 --rc genhtml_branch_coverage=1 00:06:23.988 --rc genhtml_function_coverage=1 00:06:23.988 --rc genhtml_legend=1 00:06:23.988 --rc geninfo_all_blocks=1 00:06:23.988 --rc geninfo_unexecuted_blocks=1 00:06:23.988 00:06:23.988 ' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:23.988 09:17:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.988 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.989 09:17:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:29.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:29.261 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.261 09:17:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.261 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:29.262 Found net devices under 0000:af:00.0: cvl_0_0 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:29.262 Found net devices under 0000:af:00.1: cvl_0_1 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.262 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.520 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.520 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.520 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.520 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.520 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:06:29.521 00:06:29.521 --- 10.0.0.2 ping statistics --- 00:06:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.521 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:29.521 00:06:29.521 --- 10.0.0.1 ping statistics --- 00:06:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.521 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3178749 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3178749 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3178749 ']' 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.521 09:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.521 [2024-12-13 09:17:41.842271] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
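For reference, the nvmf_tcp_init sequence traced above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a dedicated network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the host namespace as 10.0.0.1. A minimal sketch of the equivalent manual steps, using the interface names, addresses and TCP port from this run:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # host (initiator) -> namespace (target)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace (target) -> host (initiator)

The nvmf_tgt application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so only the target sees the 10.0.0.2 side of the link; the initiator later connects to 10.0.0.2:4420 from the host namespace.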
00:06:29.521 [2024-12-13 09:17:41.842312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.779 [2024-12-13 09:17:41.907134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.779 [2024-12-13 09:17:41.945209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.779 [2024-12-13 09:17:41.945255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.779 [2024-12-13 09:17:41.945261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.779 [2024-12-13 09:17:41.945267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.779 [2024-12-13 09:17:41.945272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.779 [2024-12-13 09:17:41.945773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.779 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:30.037 [2024-12-13 09:17:42.250572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:30.037 ************************************ 00:06:30.037 START TEST lvs_grow_clean 00:06:30.037 ************************************ 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:30.037 09:17:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.037 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:30.296 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:30.296 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:30.554 09:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 lvol 150 00:06:30.812 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dae3e8cb-21b0-46aa-9125-e535bceb4daa 00:06:30.812 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:30.812 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:31.070 [2024-12-13 09:17:43.248222] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:31.070 [2024-12-13 09:17:43.248274] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:31.070 true 00:06:31.070 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:31.071 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:31.329 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:31.329 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.329 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dae3e8cb-21b0-46aa-9125-e535bceb4daa 00:06:31.587 09:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.845 [2024-12-13 09:17:43.994443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3179240 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3179240 /var/tmp/bdevperf.sock 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3179240 ']' 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:31.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.845 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 [2024-12-13 09:17:44.216536] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
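The lvs_grow_clean setup traced above builds the device under test entirely out of a file: a 200 MiB file backs an AIO bdev, which hosts an lvstore with 4 MiB clusters, from which a 150 MiB lvol is carved and exported over NVMe/TCP. Condensed, the RPC sequence is roughly the following (full /var/jenkins/... paths shortened to rpc.py and the aio_bdev file name; UUIDs are the ones reported in this run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  truncate -s 200M aio_bdev
  rpc.py bdev_aio_create aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1    # 49 data clusters
  rpc.py bdev_lvol_create -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 lvol 150
  truncate -s 400M aio_bdev
  rpc.py bdev_aio_rescan aio_bdev
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dae3e8cb-21b0-46aa-9125-e535bceb4daa
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that doubling the file to 400 MiB and rescanning the AIO bdev does not by itself grow the lvstore: it still reports 49 data clusters until bdev_lvol_grow_lvstore is issued later, while bdevperf I/O is in flight, at which point total_data_clusters jumps to 99.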
00:06:32.104 [2024-12-13 09:17:44.216578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179240 ] 00:06:32.104 [2024-12-13 09:17:44.279264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.104 [2024-12-13 09:17:44.320211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.104 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.104 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:32.104 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:32.362 Nvme0n1 00:06:32.362 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:32.620 [ 00:06:32.620 { 00:06:32.620 "name": "Nvme0n1", 00:06:32.620 "aliases": [ 00:06:32.620 "dae3e8cb-21b0-46aa-9125-e535bceb4daa" 00:06:32.620 ], 00:06:32.620 "product_name": "NVMe disk", 00:06:32.620 "block_size": 4096, 00:06:32.620 "num_blocks": 38912, 00:06:32.620 "uuid": "dae3e8cb-21b0-46aa-9125-e535bceb4daa", 00:06:32.620 "numa_id": 1, 00:06:32.620 "assigned_rate_limits": { 00:06:32.620 "rw_ios_per_sec": 0, 00:06:32.620 "rw_mbytes_per_sec": 0, 00:06:32.620 "r_mbytes_per_sec": 0, 00:06:32.620 "w_mbytes_per_sec": 0 00:06:32.620 }, 00:06:32.620 "claimed": false, 00:06:32.620 "zoned": false, 00:06:32.620 "supported_io_types": { 00:06:32.620 "read": true, 00:06:32.620 "write": true, 00:06:32.620 "unmap": true, 00:06:32.620 "flush": true, 00:06:32.620 "reset": true, 00:06:32.620 "nvme_admin": true, 00:06:32.620 "nvme_io": true, 00:06:32.620 "nvme_io_md": false, 00:06:32.620 "write_zeroes": true, 00:06:32.620 "zcopy": false, 00:06:32.620 "get_zone_info": false, 00:06:32.620 "zone_management": false, 00:06:32.620 "zone_append": false, 00:06:32.620 "compare": true, 00:06:32.620 "compare_and_write": true, 00:06:32.620 "abort": true, 00:06:32.620 "seek_hole": false, 00:06:32.620 "seek_data": false, 00:06:32.620 "copy": true, 00:06:32.620 "nvme_iov_md": false 00:06:32.620 }, 00:06:32.620 "memory_domains": [ 00:06:32.620 { 00:06:32.620 "dma_device_id": "system", 00:06:32.620 "dma_device_type": 1 00:06:32.620 } 00:06:32.620 ], 00:06:32.620 "driver_specific": { 00:06:32.620 "nvme": [ 00:06:32.620 { 00:06:32.620 "trid": { 00:06:32.620 "trtype": "TCP", 00:06:32.620 "adrfam": "IPv4", 00:06:32.620 "traddr": "10.0.0.2", 00:06:32.620 "trsvcid": "4420", 00:06:32.620 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:32.620 }, 00:06:32.620 "ctrlr_data": { 00:06:32.620 "cntlid": 1, 00:06:32.620 "vendor_id": "0x8086", 00:06:32.620 "model_number": "SPDK bdev Controller", 00:06:32.620 "serial_number": "SPDK0", 00:06:32.620 "firmware_revision": "25.01", 00:06:32.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.620 "oacs": { 00:06:32.620 "security": 0, 00:06:32.620 "format": 0, 00:06:32.620 "firmware": 0, 00:06:32.620 "ns_manage": 0 00:06:32.620 }, 00:06:32.620 "multi_ctrlr": true, 00:06:32.620 
"ana_reporting": false 00:06:32.620 }, 00:06:32.620 "vs": { 00:06:32.620 "nvme_version": "1.3" 00:06:32.620 }, 00:06:32.620 "ns_data": { 00:06:32.620 "id": 1, 00:06:32.620 "can_share": true 00:06:32.620 } 00:06:32.620 } 00:06:32.620 ], 00:06:32.620 "mp_policy": "active_passive" 00:06:32.620 } 00:06:32.620 } 00:06:32.620 ] 00:06:32.620 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3179454 00:06:32.620 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:32.620 09:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:32.889 Running I/O for 10 seconds... 00:06:33.827 Latency(us) 00:06:33.827 [2024-12-13T08:17:46.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:33.827 Nvme0n1 : 1.00 22574.00 88.18 0.00 0.00 0.00 0.00 0.00 00:06:33.827 [2024-12-13T08:17:46.193Z] =================================================================================================================== 00:06:33.827 [2024-12-13T08:17:46.193Z] Total : 22574.00 88.18 0.00 0.00 0.00 0.00 0.00 00:06:33.827 00:06:34.761 09:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:34.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:34.762 Nvme0n1 : 2.00 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:06:34.762 [2024-12-13T08:17:47.128Z] =================================================================================================================== 00:06:34.762 [2024-12-13T08:17:47.128Z] Total : 22671.00 88.56 0.00 0.00 0.00 0.00 0.00 00:06:34.762 00:06:34.762 true 00:06:34.762 09:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:34.762 09:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:35.019 09:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:35.019 09:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:35.019 09:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3179454 00:06:35.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:35.954 Nvme0n1 : 3.00 22684.67 88.61 0.00 0.00 0.00 0.00 0.00 00:06:35.954 [2024-12-13T08:17:48.320Z] =================================================================================================================== 00:06:35.954 [2024-12-13T08:17:48.320Z] Total : 22684.67 88.61 0.00 0.00 0.00 0.00 0.00 00:06:35.954 00:06:36.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:36.888 Nvme0n1 : 4.00 22729.50 88.79 0.00 0.00 0.00 0.00 0.00 00:06:36.888 [2024-12-13T08:17:49.254Z] 
=================================================================================================================== 00:06:36.888 [2024-12-13T08:17:49.254Z] Total : 22729.50 88.79 0.00 0.00 0.00 0.00 0.00 00:06:36.888 00:06:37.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.821 Nvme0n1 : 5.00 22764.40 88.92 0.00 0.00 0.00 0.00 0.00 00:06:37.821 [2024-12-13T08:17:50.187Z] =================================================================================================================== 00:06:37.821 [2024-12-13T08:17:50.187Z] Total : 22764.40 88.92 0.00 0.00 0.00 0.00 0.00 00:06:37.821 00:06:38.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.756 Nvme0n1 : 6.00 22721.00 88.75 0.00 0.00 0.00 0.00 0.00 00:06:38.756 [2024-12-13T08:17:51.122Z] =================================================================================================================== 00:06:38.756 [2024-12-13T08:17:51.122Z] Total : 22721.00 88.75 0.00 0.00 0.00 0.00 0.00 00:06:38.756 00:06:39.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:39.689 Nvme0n1 : 7.00 22752.86 88.88 0.00 0.00 0.00 0.00 0.00 00:06:39.689 [2024-12-13T08:17:52.055Z] =================================================================================================================== 00:06:39.689 [2024-12-13T08:17:52.055Z] Total : 22752.86 88.88 0.00 0.00 0.00 0.00 0.00 00:06:39.689 00:06:41.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:41.065 Nvme0n1 : 8.00 22775.75 88.97 0.00 0.00 0.00 0.00 0.00 00:06:41.065 [2024-12-13T08:17:53.431Z] =================================================================================================================== 00:06:41.065 [2024-12-13T08:17:53.431Z] Total : 22775.75 88.97 0.00 0.00 0.00 0.00 0.00 00:06:41.065 00:06:42.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.000 Nvme0n1 : 9.00 22800.67 89.07 0.00 0.00 0.00 0.00 0.00 00:06:42.000 [2024-12-13T08:17:54.366Z] =================================================================================================================== 00:06:42.000 [2024-12-13T08:17:54.366Z] Total : 22800.67 89.07 0.00 0.00 0.00 0.00 0.00 00:06:42.000 00:06:43.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.014 Nvme0n1 : 10.00 22815.00 89.12 0.00 0.00 0.00 0.00 0.00 00:06:43.014 [2024-12-13T08:17:55.380Z] =================================================================================================================== 00:06:43.014 [2024-12-13T08:17:55.380Z] Total : 22815.00 89.12 0.00 0.00 0.00 0.00 0.00 00:06:43.014 00:06:43.014 00:06:43.014 Latency(us) 00:06:43.014 [2024-12-13T08:17:55.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.014 Nvme0n1 : 10.01 22815.16 89.12 0.00 0.00 5606.22 2496.61 8051.57 00:06:43.014 [2024-12-13T08:17:55.380Z] =================================================================================================================== 00:06:43.014 [2024-12-13T08:17:55.380Z] Total : 22815.16 89.12 0.00 0.00 5606.22 2496.61 8051.57 00:06:43.014 { 00:06:43.014 "results": [ 00:06:43.014 { 00:06:43.014 "job": "Nvme0n1", 00:06:43.014 "core_mask": "0x2", 00:06:43.014 "workload": "randwrite", 00:06:43.014 "status": "finished", 00:06:43.014 "queue_depth": 128, 00:06:43.014 "io_size": 4096, 00:06:43.014 
"runtime": 10.005541, 00:06:43.014 "iops": 22815.15812088522, 00:06:43.014 "mibps": 89.12171140970788, 00:06:43.014 "io_failed": 0, 00:06:43.014 "io_timeout": 0, 00:06:43.014 "avg_latency_us": 5606.223916886637, 00:06:43.014 "min_latency_us": 2496.609523809524, 00:06:43.014 "max_latency_us": 8051.565714285714 00:06:43.014 } 00:06:43.014 ], 00:06:43.014 "core_count": 1 00:06:43.014 } 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3179240 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3179240 ']' 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3179240 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179240 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179240' 00:06:43.014 killing process with pid 3179240 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3179240 00:06:43.014 Received shutdown signal, test time was about 10.000000 seconds 00:06:43.014 00:06:43.014 Latency(us) 00:06:43.014 [2024-12-13T08:17:55.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.014 [2024-12-13T08:17:55.380Z] =================================================================================================================== 00:06:43.014 [2024-12-13T08:17:55.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3179240 00:06:43.014 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.301 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.573 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:43.573 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:43.573 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:43.573 09:17:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:43.573 09:17:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:43.832 [2024-12-13 09:17:56.020044] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.832 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:44.100 request: 00:06:44.100 { 00:06:44.100 "uuid": "ec7845bb-d6d0-4187-8e19-0ed3432dccc1", 00:06:44.100 "method": "bdev_lvol_get_lvstores", 00:06:44.100 "req_id": 1 00:06:44.100 } 00:06:44.100 Got JSON-RPC error response 00:06:44.100 response: 00:06:44.100 { 00:06:44.100 "code": -19, 00:06:44.100 "message": "No such device" 00:06:44.100 } 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:44.100 aio_bdev 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dae3e8cb-21b0-46aa-9125-e535bceb4daa 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dae3e8cb-21b0-46aa-9125-e535bceb4daa 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:44.100 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:44.358 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dae3e8cb-21b0-46aa-9125-e535bceb4daa -t 2000 00:06:44.615 [ 00:06:44.615 { 00:06:44.615 "name": "dae3e8cb-21b0-46aa-9125-e535bceb4daa", 00:06:44.615 "aliases": [ 00:06:44.615 "lvs/lvol" 00:06:44.615 ], 00:06:44.615 "product_name": "Logical Volume", 00:06:44.615 "block_size": 4096, 00:06:44.615 "num_blocks": 38912, 00:06:44.615 "uuid": "dae3e8cb-21b0-46aa-9125-e535bceb4daa", 00:06:44.615 "assigned_rate_limits": { 00:06:44.615 "rw_ios_per_sec": 0, 00:06:44.615 "rw_mbytes_per_sec": 0, 00:06:44.615 "r_mbytes_per_sec": 0, 00:06:44.615 "w_mbytes_per_sec": 0 00:06:44.615 }, 00:06:44.615 "claimed": false, 00:06:44.615 "zoned": false, 00:06:44.615 "supported_io_types": { 00:06:44.615 "read": true, 00:06:44.615 "write": true, 00:06:44.615 "unmap": true, 00:06:44.615 "flush": false, 00:06:44.615 "reset": true, 00:06:44.615 "nvme_admin": false, 00:06:44.615 "nvme_io": false, 00:06:44.615 "nvme_io_md": false, 00:06:44.615 "write_zeroes": true, 00:06:44.615 "zcopy": false, 00:06:44.615 "get_zone_info": false, 00:06:44.615 "zone_management": false, 00:06:44.615 "zone_append": false, 00:06:44.615 "compare": false, 00:06:44.615 "compare_and_write": false, 00:06:44.615 "abort": false, 00:06:44.615 "seek_hole": true, 00:06:44.615 "seek_data": true, 00:06:44.615 "copy": false, 00:06:44.615 "nvme_iov_md": false 00:06:44.615 }, 00:06:44.615 "driver_specific": { 00:06:44.616 "lvol": { 00:06:44.616 "lvol_store_uuid": "ec7845bb-d6d0-4187-8e19-0ed3432dccc1", 00:06:44.616 "base_bdev": "aio_bdev", 00:06:44.616 "thin_provision": false, 00:06:44.616 "num_allocated_clusters": 38, 00:06:44.616 "snapshot": false, 00:06:44.616 "clone": false, 00:06:44.616 "esnap_clone": false 00:06:44.616 } 00:06:44.616 } 00:06:44.616 } 00:06:44.616 ] 00:06:44.616 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:44.616 09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:44.616 
09:17:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:44.873 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:44.873 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:44.873 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:44.873 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:44.873 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dae3e8cb-21b0-46aa-9125-e535bceb4daa 00:06:45.131 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec7845bb-d6d0-4187-8e19-0ed3432dccc1 00:06:45.388 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:45.388 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.647 00:06:45.647 real 0m15.491s 00:06:45.647 user 0m14.911s 00:06:45.647 sys 0m1.588s 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:45.647 ************************************ 00:06:45.647 END TEST lvs_grow_clean 00:06:45.647 ************************************ 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.647 ************************************ 00:06:45.647 START TEST lvs_grow_dirty 00:06:45.647 ************************************ 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.647 09:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:45.905 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:45.905 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:46.164 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f lvol 150 00:06:46.423 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:06:46.423 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.423 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:46.681 [2024-12-13 09:17:58.823185] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:46.681 [2024-12-13 09:17:58.823235] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:46.681 true 00:06:46.681 09:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:46.681 09:17:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:46.681 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:46.681 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.938 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:06:47.196 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.196 [2024-12-13 09:17:59.545320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.196 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3181928 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3181928 /var/tmp/bdevperf.sock 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3181928 ']' 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.455 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:47.455 [2024-12-13 09:17:59.761183] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
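lvs_grow_dirty repeats the same provisioning with a fresh lvstore (3ac6cf73-289f-4171-9e76-629bfab6ea6f) and lvol (bddf24c3-0fcb-4f72-bf39-ec6300b74ade), this time running the lvs_grow helper with the dirty argument. The initiator side that follows is the same in both tests: a bdevperf instance is started on core 1 with its own RPC socket, attaches to the exported namespace over TCP, and then drives the workload. Roughly:

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attached controller shows up as bdev Nvme0n1 with 38912 blocks of 4096 bytes (the 150 MiB lvol rounded up to 38 clusters of 4 MiB), and perform_tests runs the 10-second randwrite job whose per-second IOPS table and final latency summary follow.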
00:06:47.455 [2024-12-13 09:17:59.761227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181928 ] 00:06:47.713 [2024-12-13 09:17:59.825499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.713 [2024-12-13 09:17:59.867196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.713 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.713 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:47.713 09:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:47.971 Nvme0n1 00:06:48.229 09:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:48.229 [ 00:06:48.229 { 00:06:48.229 "name": "Nvme0n1", 00:06:48.229 "aliases": [ 00:06:48.229 "bddf24c3-0fcb-4f72-bf39-ec6300b74ade" 00:06:48.229 ], 00:06:48.229 "product_name": "NVMe disk", 00:06:48.229 "block_size": 4096, 00:06:48.229 "num_blocks": 38912, 00:06:48.229 "uuid": "bddf24c3-0fcb-4f72-bf39-ec6300b74ade", 00:06:48.229 "numa_id": 1, 00:06:48.229 "assigned_rate_limits": { 00:06:48.229 "rw_ios_per_sec": 0, 00:06:48.229 "rw_mbytes_per_sec": 0, 00:06:48.229 "r_mbytes_per_sec": 0, 00:06:48.229 "w_mbytes_per_sec": 0 00:06:48.229 }, 00:06:48.229 "claimed": false, 00:06:48.229 "zoned": false, 00:06:48.229 "supported_io_types": { 00:06:48.229 "read": true, 00:06:48.229 "write": true, 00:06:48.229 "unmap": true, 00:06:48.229 "flush": true, 00:06:48.229 "reset": true, 00:06:48.229 "nvme_admin": true, 00:06:48.229 "nvme_io": true, 00:06:48.229 "nvme_io_md": false, 00:06:48.229 "write_zeroes": true, 00:06:48.229 "zcopy": false, 00:06:48.229 "get_zone_info": false, 00:06:48.229 "zone_management": false, 00:06:48.229 "zone_append": false, 00:06:48.229 "compare": true, 00:06:48.229 "compare_and_write": true, 00:06:48.229 "abort": true, 00:06:48.229 "seek_hole": false, 00:06:48.229 "seek_data": false, 00:06:48.229 "copy": true, 00:06:48.229 "nvme_iov_md": false 00:06:48.229 }, 00:06:48.229 "memory_domains": [ 00:06:48.229 { 00:06:48.229 "dma_device_id": "system", 00:06:48.229 "dma_device_type": 1 00:06:48.229 } 00:06:48.229 ], 00:06:48.229 "driver_specific": { 00:06:48.229 "nvme": [ 00:06:48.229 { 00:06:48.229 "trid": { 00:06:48.229 "trtype": "TCP", 00:06:48.229 "adrfam": "IPv4", 00:06:48.229 "traddr": "10.0.0.2", 00:06:48.229 "trsvcid": "4420", 00:06:48.229 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:48.229 }, 00:06:48.229 "ctrlr_data": { 00:06:48.229 "cntlid": 1, 00:06:48.229 "vendor_id": "0x8086", 00:06:48.229 "model_number": "SPDK bdev Controller", 00:06:48.229 "serial_number": "SPDK0", 00:06:48.229 "firmware_revision": "25.01", 00:06:48.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.229 "oacs": { 00:06:48.229 "security": 0, 00:06:48.229 "format": 0, 00:06:48.229 "firmware": 0, 00:06:48.229 "ns_manage": 0 00:06:48.229 }, 00:06:48.229 "multi_ctrlr": true, 00:06:48.229 
"ana_reporting": false 00:06:48.229 }, 00:06:48.229 "vs": { 00:06:48.229 "nvme_version": "1.3" 00:06:48.229 }, 00:06:48.229 "ns_data": { 00:06:48.229 "id": 1, 00:06:48.229 "can_share": true 00:06:48.229 } 00:06:48.229 } 00:06:48.229 ], 00:06:48.229 "mp_policy": "active_passive" 00:06:48.229 } 00:06:48.229 } 00:06:48.229 ] 00:06:48.229 09:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3182033 00:06:48.229 09:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:48.229 09:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:48.487 Running I/O for 10 seconds... 00:06:49.421 Latency(us) 00:06:49.421 [2024-12-13T08:18:01.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.421 Nvme0n1 : 1.00 23540.00 91.95 0.00 0.00 0.00 0.00 0.00 00:06:49.421 [2024-12-13T08:18:01.787Z] =================================================================================================================== 00:06:49.421 [2024-12-13T08:18:01.787Z] Total : 23540.00 91.95 0.00 0.00 0.00 0.00 0.00 00:06:49.421 00:06:50.355 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:50.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.355 Nvme0n1 : 2.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:06:50.355 [2024-12-13T08:18:02.721Z] =================================================================================================================== 00:06:50.355 [2024-12-13T08:18:02.721Z] Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:06:50.355 00:06:50.612 true 00:06:50.613 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:50.613 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:50.613 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:50.613 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:50.613 09:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3182033 00:06:51.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.546 Nvme0n1 : 3.00 23593.33 92.16 0.00 0.00 0.00 0.00 0.00 00:06:51.546 [2024-12-13T08:18:03.913Z] =================================================================================================================== 00:06:51.547 [2024-12-13T08:18:03.913Z] Total : 23593.33 92.16 0.00 0.00 0.00 0.00 0.00 00:06:51.547 00:06:52.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.480 Nvme0n1 : 4.00 23618.00 92.26 0.00 0.00 0.00 0.00 0.00 00:06:52.480 [2024-12-13T08:18:04.846Z] 
=================================================================================================================== 00:06:52.480 [2024-12-13T08:18:04.846Z] Total : 23618.00 92.26 0.00 0.00 0.00 0.00 0.00 00:06:52.480 00:06:53.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.413 Nvme0n1 : 5.00 23648.60 92.38 0.00 0.00 0.00 0.00 0.00 00:06:53.413 [2024-12-13T08:18:05.779Z] =================================================================================================================== 00:06:53.413 [2024-12-13T08:18:05.779Z] Total : 23648.60 92.38 0.00 0.00 0.00 0.00 0.00 00:06:53.413 00:06:54.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.348 Nvme0n1 : 6.00 23646.00 92.37 0.00 0.00 0.00 0.00 0.00 00:06:54.348 [2024-12-13T08:18:06.714Z] =================================================================================================================== 00:06:54.348 [2024-12-13T08:18:06.714Z] Total : 23646.00 92.37 0.00 0.00 0.00 0.00 0.00 00:06:54.348 00:06:55.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.284 Nvme0n1 : 7.00 23661.14 92.43 0.00 0.00 0.00 0.00 0.00 00:06:55.284 [2024-12-13T08:18:07.650Z] =================================================================================================================== 00:06:55.284 [2024-12-13T08:18:07.650Z] Total : 23661.14 92.43 0.00 0.00 0.00 0.00 0.00 00:06:55.284 00:06:56.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.658 Nvme0n1 : 8.00 23680.12 92.50 0.00 0.00 0.00 0.00 0.00 00:06:56.658 [2024-12-13T08:18:09.024Z] =================================================================================================================== 00:06:56.658 [2024-12-13T08:18:09.024Z] Total : 23680.12 92.50 0.00 0.00 0.00 0.00 0.00 00:06:56.658 00:06:57.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.592 Nvme0n1 : 9.00 23716.89 92.64 0.00 0.00 0.00 0.00 0.00 00:06:57.592 [2024-12-13T08:18:09.958Z] =================================================================================================================== 00:06:57.592 [2024-12-13T08:18:09.958Z] Total : 23716.89 92.64 0.00 0.00 0.00 0.00 0.00 00:06:57.592 00:06:58.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.526 Nvme0n1 : 10.00 23718.10 92.65 0.00 0.00 0.00 0.00 0.00 00:06:58.526 [2024-12-13T08:18:10.892Z] =================================================================================================================== 00:06:58.526 [2024-12-13T08:18:10.892Z] Total : 23718.10 92.65 0.00 0.00 0.00 0.00 0.00 00:06:58.526 00:06:58.526 00:06:58.526 Latency(us) 00:06:58.526 [2024-12-13T08:18:10.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.526 Nvme0n1 : 10.01 23718.83 92.65 0.00 0.00 5393.49 1458.96 10111.27 00:06:58.526 [2024-12-13T08:18:10.892Z] =================================================================================================================== 00:06:58.526 [2024-12-13T08:18:10.892Z] Total : 23718.83 92.65 0.00 0.00 5393.49 1458.96 10111.27 00:06:58.526 { 00:06:58.526 "results": [ 00:06:58.526 { 00:06:58.526 "job": "Nvme0n1", 00:06:58.526 "core_mask": "0x2", 00:06:58.526 "workload": "randwrite", 00:06:58.526 "status": "finished", 00:06:58.526 "queue_depth": 128, 00:06:58.526 "io_size": 4096, 00:06:58.526 
"runtime": 10.005088, 00:06:58.526 "iops": 23718.83185835047, 00:06:58.526 "mibps": 92.65168694668152, 00:06:58.526 "io_failed": 0, 00:06:58.526 "io_timeout": 0, 00:06:58.526 "avg_latency_us": 5393.488990801425, 00:06:58.526 "min_latency_us": 1458.9561904761904, 00:06:58.526 "max_latency_us": 10111.26857142857 00:06:58.526 } 00:06:58.526 ], 00:06:58.526 "core_count": 1 00:06:58.526 } 00:06:58.526 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3181928 00:06:58.526 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3181928 ']' 00:06:58.526 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3181928 00:06:58.526 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:06:58.526 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181928 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181928' 00:06:58.527 killing process with pid 3181928 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3181928 00:06:58.527 Received shutdown signal, test time was about 10.000000 seconds 00:06:58.527 00:06:58.527 Latency(us) 00:06:58.527 [2024-12-13T08:18:10.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.527 [2024-12-13T08:18:10.893Z] =================================================================================================================== 00:06:58.527 [2024-12-13T08:18:10.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3181928 00:06:58.527 09:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:58.785 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.043 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:06:59.043 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:06:59.301 09:18:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3178749 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3178749 00:06:59.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3178749 Killed "${NVMF_APP[@]}" "$@" 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3184316 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3184316 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3184316 ']' 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.301 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:59.301 [2024-12-13 09:18:11.543329] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:06:59.301 [2024-12-13 09:18:11.543376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.301 [2024-12-13 09:18:11.610419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.301 [2024-12-13 09:18:11.650561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.301 [2024-12-13 09:18:11.650598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.301 [2024-12-13 09:18:11.650605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.301 [2024-12-13 09:18:11.650611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:59.301 [2024-12-13 09:18:11.650618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:59.301 [2024-12-13 09:18:11.651093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.560 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.818 [2024-12-13 09:18:11.953564] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:06:59.818 [2024-12-13 09:18:11.953648] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:06:59.818 [2024-12-13 09:18:11.953673] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.818 09:18:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:59.818 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bddf24c3-0fcb-4f72-bf39-ec6300b74ade -t 2000 00:07:00.075 [ 00:07:00.075 { 00:07:00.075 "name": "bddf24c3-0fcb-4f72-bf39-ec6300b74ade", 00:07:00.075 "aliases": [ 00:07:00.075 "lvs/lvol" 00:07:00.075 ], 00:07:00.075 "product_name": "Logical Volume", 00:07:00.075 "block_size": 4096, 00:07:00.075 "num_blocks": 38912, 00:07:00.075 "uuid": "bddf24c3-0fcb-4f72-bf39-ec6300b74ade", 00:07:00.075 "assigned_rate_limits": { 00:07:00.075 "rw_ios_per_sec": 0, 00:07:00.075 "rw_mbytes_per_sec": 0, 
00:07:00.075 "r_mbytes_per_sec": 0, 00:07:00.075 "w_mbytes_per_sec": 0 00:07:00.075 }, 00:07:00.075 "claimed": false, 00:07:00.075 "zoned": false, 00:07:00.075 "supported_io_types": { 00:07:00.075 "read": true, 00:07:00.075 "write": true, 00:07:00.075 "unmap": true, 00:07:00.075 "flush": false, 00:07:00.075 "reset": true, 00:07:00.075 "nvme_admin": false, 00:07:00.075 "nvme_io": false, 00:07:00.075 "nvme_io_md": false, 00:07:00.075 "write_zeroes": true, 00:07:00.075 "zcopy": false, 00:07:00.075 "get_zone_info": false, 00:07:00.075 "zone_management": false, 00:07:00.075 "zone_append": false, 00:07:00.075 "compare": false, 00:07:00.075 "compare_and_write": false, 00:07:00.075 "abort": false, 00:07:00.075 "seek_hole": true, 00:07:00.075 "seek_data": true, 00:07:00.075 "copy": false, 00:07:00.075 "nvme_iov_md": false 00:07:00.075 }, 00:07:00.075 "driver_specific": { 00:07:00.075 "lvol": { 00:07:00.075 "lvol_store_uuid": "3ac6cf73-289f-4171-9e76-629bfab6ea6f", 00:07:00.075 "base_bdev": "aio_bdev", 00:07:00.075 "thin_provision": false, 00:07:00.075 "num_allocated_clusters": 38, 00:07:00.075 "snapshot": false, 00:07:00.075 "clone": false, 00:07:00.075 "esnap_clone": false 00:07:00.075 } 00:07:00.075 } 00:07:00.075 } 00:07:00.075 ] 00:07:00.076 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:00.076 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:00.076 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:00.333 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:00.333 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:00.333 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:00.333 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:00.333 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.591 [2024-12-13 09:18:12.846330] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.591 09:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:00.849 request: 00:07:00.849 { 00:07:00.849 "uuid": "3ac6cf73-289f-4171-9e76-629bfab6ea6f", 00:07:00.849 "method": "bdev_lvol_get_lvstores", 00:07:00.849 "req_id": 1 00:07:00.849 } 00:07:00.849 Got JSON-RPC error response 00:07:00.849 response: 00:07:00.849 { 00:07:00.849 "code": -19, 00:07:00.849 "message": "No such device" 00:07:00.849 } 00:07:00.849 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:00.849 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.849 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.849 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.849 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.106 aio_bdev 00:07:01.106 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:07:01.106 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:07:01.107 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:01.107 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:01.107 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:01.107 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:01.107 09:18:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:01.107 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bddf24c3-0fcb-4f72-bf39-ec6300b74ade -t 2000 00:07:01.366 [ 00:07:01.366 { 00:07:01.366 "name": "bddf24c3-0fcb-4f72-bf39-ec6300b74ade", 00:07:01.366 "aliases": [ 00:07:01.366 "lvs/lvol" 00:07:01.366 ], 00:07:01.366 "product_name": "Logical Volume", 00:07:01.366 "block_size": 4096, 00:07:01.366 "num_blocks": 38912, 00:07:01.366 "uuid": "bddf24c3-0fcb-4f72-bf39-ec6300b74ade", 00:07:01.366 "assigned_rate_limits": { 00:07:01.366 "rw_ios_per_sec": 0, 00:07:01.366 "rw_mbytes_per_sec": 0, 00:07:01.366 "r_mbytes_per_sec": 0, 00:07:01.366 "w_mbytes_per_sec": 0 00:07:01.366 }, 00:07:01.366 "claimed": false, 00:07:01.366 "zoned": false, 00:07:01.366 "supported_io_types": { 00:07:01.366 "read": true, 00:07:01.366 "write": true, 00:07:01.366 "unmap": true, 00:07:01.366 "flush": false, 00:07:01.366 "reset": true, 00:07:01.366 "nvme_admin": false, 00:07:01.366 "nvme_io": false, 00:07:01.366 "nvme_io_md": false, 00:07:01.366 "write_zeroes": true, 00:07:01.366 "zcopy": false, 00:07:01.366 "get_zone_info": false, 00:07:01.366 "zone_management": false, 00:07:01.366 "zone_append": false, 00:07:01.366 "compare": false, 00:07:01.366 "compare_and_write": false, 00:07:01.366 "abort": false, 00:07:01.366 "seek_hole": true, 00:07:01.366 "seek_data": true, 00:07:01.366 "copy": false, 00:07:01.366 "nvme_iov_md": false 00:07:01.366 }, 00:07:01.366 "driver_specific": { 00:07:01.366 "lvol": { 00:07:01.366 "lvol_store_uuid": "3ac6cf73-289f-4171-9e76-629bfab6ea6f", 00:07:01.366 "base_bdev": "aio_bdev", 00:07:01.366 "thin_provision": false, 00:07:01.366 "num_allocated_clusters": 38, 00:07:01.366 "snapshot": false, 00:07:01.366 "clone": false, 00:07:01.366 "esnap_clone": false 00:07:01.366 } 00:07:01.366 } 00:07:01.366 } 00:07:01.366 ] 00:07:01.366 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:01.366 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:01.366 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:01.626 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:01.626 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:01.626 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:01.626 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:01.626 09:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bddf24c3-0fcb-4f72-bf39-ec6300b74ade 00:07:01.884 09:18:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ac6cf73-289f-4171-9e76-629bfab6ea6f 00:07:02.141 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.399 00:07:02.399 real 0m16.740s 00:07:02.399 user 0m43.432s 00:07:02.399 sys 0m3.738s 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:02.399 ************************************ 00:07:02.399 END TEST lvs_grow_dirty 00:07:02.399 ************************************ 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:02.399 nvmf_trace.0 00:07:02.399 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.400 rmmod nvme_tcp 00:07:02.400 rmmod nvme_fabrics 00:07:02.400 rmmod nvme_keyring 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:02.400 
09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3184316 ']' 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3184316 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3184316 ']' 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3184316 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.400 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184316 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184316' 00:07:02.658 killing process with pid 3184316 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3184316 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3184316 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.658 09:18:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.192 00:07:05.192 real 0m41.106s 00:07:05.192 user 1m3.822s 00:07:05.192 sys 0m9.950s 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.192 ************************************ 00:07:05.192 END TEST nvmf_lvs_grow 00:07:05.192 ************************************ 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.192 ************************************ 00:07:05.192 START TEST nvmf_bdev_io_wait 00:07:05.192 ************************************ 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:05.192 * Looking for test storage... 00:07:05.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.192 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.193 --rc genhtml_branch_coverage=1 00:07:05.193 --rc genhtml_function_coverage=1 00:07:05.193 --rc genhtml_legend=1 00:07:05.193 --rc geninfo_all_blocks=1 00:07:05.193 --rc geninfo_unexecuted_blocks=1 00:07:05.193 00:07:05.193 ' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.193 --rc genhtml_branch_coverage=1 00:07:05.193 --rc genhtml_function_coverage=1 00:07:05.193 --rc genhtml_legend=1 00:07:05.193 --rc geninfo_all_blocks=1 00:07:05.193 --rc geninfo_unexecuted_blocks=1 00:07:05.193 00:07:05.193 ' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.193 --rc genhtml_branch_coverage=1 00:07:05.193 --rc genhtml_function_coverage=1 00:07:05.193 --rc genhtml_legend=1 00:07:05.193 --rc geninfo_all_blocks=1 00:07:05.193 --rc geninfo_unexecuted_blocks=1 00:07:05.193 00:07:05.193 ' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.193 --rc genhtml_branch_coverage=1 00:07:05.193 --rc genhtml_function_coverage=1 00:07:05.193 --rc genhtml_legend=1 00:07:05.193 --rc geninfo_all_blocks=1 00:07:05.193 --rc geninfo_unexecuted_blocks=1 00:07:05.193 00:07:05.193 ' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.193 09:18:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.193 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.194 09:18:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:10.461 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:10.462 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:10.462 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.462 09:18:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:10.462 Found net devices under 0000:af:00.0: cvl_0_0 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:10.462 Found net devices under 0000:af:00.1: cvl_0_1 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.462 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.721 09:18:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:07:10.721 00:07:10.721 --- 10.0.0.2 ping statistics --- 00:07:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.721 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:10.721 00:07:10.721 --- 10.0.0.1 ping statistics --- 00:07:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.721 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3188514 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3188514 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3188514 ']' 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.721 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:10.980 [2024-12-13 09:18:23.116160] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
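The nvmf_tcp_init sequence traced above builds the whole test data path out of the two e810 ports: the target-side port is moved into a private network namespace, the two ends get back-to-back 10.0.0.x addresses, and a single tagged iptables rule opens the NVMe/TCP port. A minimal sketch, assuming the ports are already renamed cvl_0_0 and cvl_0_1 and cabled to each other (this condenses the common.sh steps shown in the trace, it is not the script itself):

  # target side lives in its own namespace, initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the rule is tagged SPDK_NVMF so teardown can strip it again later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow 4420'
  # reachability check in both directions, matching the ping output in the log
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1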
00:07:10.980 [2024-12-13 09:18:23.116207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.980 [2024-12-13 09:18:23.182057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.980 [2024-12-13 09:18:23.222734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.980 [2024-12-13 09:18:23.222775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.980 [2024-12-13 09:18:23.222782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.980 [2024-12-13 09:18:23.222789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.980 [2024-12-13 09:18:23.222793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.980 [2024-12-13 09:18:23.224167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.980 [2024-12-13 09:18:23.224263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.980 [2024-12-13 09:18:23.224330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.980 [2024-12-13 09:18:23.224331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.980 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:11.239 [2024-12-13 09:18:23.376245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.239 Malloc0 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.239 [2024-12-13 09:18:23.431477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3188537 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3188539 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.239 { 00:07:11.239 "params": { 
00:07:11.239 "name": "Nvme$subsystem", 00:07:11.239 "trtype": "$TEST_TRANSPORT", 00:07:11.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "$NVMF_PORT", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.239 "hdgst": ${hdgst:-false}, 00:07:11.239 "ddgst": ${ddgst:-false} 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 } 00:07:11.239 EOF 00:07:11.239 )") 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3188541 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.239 { 00:07:11.239 "params": { 00:07:11.239 "name": "Nvme$subsystem", 00:07:11.239 "trtype": "$TEST_TRANSPORT", 00:07:11.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "$NVMF_PORT", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.239 "hdgst": ${hdgst:-false}, 00:07:11.239 "ddgst": ${ddgst:-false} 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 } 00:07:11.239 EOF 00:07:11.239 )") 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3188544 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.239 { 00:07:11.239 "params": { 
00:07:11.239 "name": "Nvme$subsystem", 00:07:11.239 "trtype": "$TEST_TRANSPORT", 00:07:11.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "$NVMF_PORT", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.239 "hdgst": ${hdgst:-false}, 00:07:11.239 "ddgst": ${ddgst:-false} 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 } 00:07:11.239 EOF 00:07:11.239 )") 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.239 { 00:07:11.239 "params": { 00:07:11.239 "name": "Nvme$subsystem", 00:07:11.239 "trtype": "$TEST_TRANSPORT", 00:07:11.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "$NVMF_PORT", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.239 "hdgst": ${hdgst:-false}, 00:07:11.239 "ddgst": ${ddgst:-false} 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 } 00:07:11.239 EOF 00:07:11.239 )") 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3188537 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.239 "params": { 00:07:11.239 "name": "Nvme1", 00:07:11.239 "trtype": "tcp", 00:07:11.239 "traddr": "10.0.0.2", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "4420", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:11.239 "hdgst": false, 00:07:11.239 "ddgst": false 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 }' 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
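Because nvmf_tgt was launched with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, nothing is initialized until the rpc_cmd calls above arrive: bdev_set_options has to land before framework_start_init, and only then are the transport, malloc bdev, subsystem, namespace and listener created. rpc_cmd drives the same RPC socket that scripts/rpc.py talks to, so a rough by-hand equivalent of the sequence in this run (default /var/tmp/spdk.sock assumed) would be:

  # small bdev_io pool on purpose, so the bdev_io_wait path actually gets exercised
  ./scripts/rpc.py bdev_set_options -p 5 -c 1
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420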
00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.239 "params": { 00:07:11.239 "name": "Nvme1", 00:07:11.239 "trtype": "tcp", 00:07:11.239 "traddr": "10.0.0.2", 00:07:11.239 "adrfam": "ipv4", 00:07:11.239 "trsvcid": "4420", 00:07:11.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:11.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:11.239 "hdgst": false, 00:07:11.239 "ddgst": false 00:07:11.239 }, 00:07:11.239 "method": "bdev_nvme_attach_controller" 00:07:11.239 }' 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:11.239 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.239 "params": { 00:07:11.240 "name": "Nvme1", 00:07:11.240 "trtype": "tcp", 00:07:11.240 "traddr": "10.0.0.2", 00:07:11.240 "adrfam": "ipv4", 00:07:11.240 "trsvcid": "4420", 00:07:11.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:11.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:11.240 "hdgst": false, 00:07:11.240 "ddgst": false 00:07:11.240 }, 00:07:11.240 "method": "bdev_nvme_attach_controller" 00:07:11.240 }' 00:07:11.240 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:11.240 09:18:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.240 "params": { 00:07:11.240 "name": "Nvme1", 00:07:11.240 "trtype": "tcp", 00:07:11.240 "traddr": "10.0.0.2", 00:07:11.240 "adrfam": "ipv4", 00:07:11.240 "trsvcid": "4420", 00:07:11.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:11.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:11.240 "hdgst": false, 00:07:11.240 "ddgst": false 00:07:11.240 }, 00:07:11.240 "method": "bdev_nvme_attach_controller" 00:07:11.240 }' 00:07:11.240 [2024-12-13 09:18:23.480772] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:11.240 [2024-12-13 09:18:23.480820] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:11.240 [2024-12-13 09:18:23.484326] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:11.240 [2024-12-13 09:18:23.484362] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:11.240 [2024-12-13 09:18:23.485898] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:11.240 [2024-12-13 09:18:23.485936] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:11.240 [2024-12-13 09:18:23.485966] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
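Each bdevperf instance above gets its bdev layer from the JSON printed by gen_nvmf_target_json and handed over on /dev/fd/63 through process substitution; the log only shows the per-controller entry, so the wrapper below uses the standard SPDK JSON-config layout as an assumption. A standalone equivalent of the write job would look roughly like this, with the bdevperf flags copied from the command line in the trace:

  # "subsystems"/"config" wrapper assumed; the test harness generates this on the fly
  cat > nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf -m 0x10 -i 1 --json nvme1.json -q 128 -o 4096 -w write -t 1 -s 256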
00:07:11.240 [2024-12-13 09:18:23.486009] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:11.498 [2024-12-13 09:18:23.661362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.498 [2024-12-13 09:18:23.707122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.498 [2024-12-13 09:18:23.752389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.498 [2024-12-13 09:18:23.797509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:11.498 [2024-12-13 09:18:23.852818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.755 [2024-12-13 09:18:23.910931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:11.755 [2024-12-13 09:18:23.914134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.755 [2024-12-13 09:18:23.956097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:11.755 Running I/O for 1 seconds... 00:07:11.755 Running I/O for 1 seconds... 00:07:11.756 Running I/O for 1 seconds... 00:07:12.013 Running I/O for 1 seconds... 00:07:12.946 7227.00 IOPS, 28.23 MiB/s 00:07:12.946 Latency(us) 00:07:12.946 [2024-12-13T08:18:25.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.946 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:12.946 Nvme1n1 : 1.02 7247.26 28.31 0.00 0.00 17500.03 6678.43 26963.38 00:07:12.946 [2024-12-13T08:18:25.312Z] =================================================================================================================== 00:07:12.946 [2024-12-13T08:18:25.312Z] Total : 7247.26 28.31 0.00 0.00 17500.03 6678.43 26963.38 00:07:12.946 7059.00 IOPS, 27.57 MiB/s 00:07:12.946 Latency(us) 00:07:12.946 [2024-12-13T08:18:25.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.946 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:12.946 Nvme1n1 : 1.01 7162.72 27.98 0.00 0.00 17824.00 4119.41 29459.99 00:07:12.946 [2024-12-13T08:18:25.312Z] =================================================================================================================== 00:07:12.946 [2024-12-13T08:18:25.312Z] Total : 7162.72 27.98 0.00 0.00 17824.00 4119.41 29459.99 00:07:12.946 12887.00 IOPS, 50.34 MiB/s 00:07:12.946 Latency(us) 00:07:12.946 [2024-12-13T08:18:25.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.946 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:12.946 Nvme1n1 : 1.00 12962.87 50.64 0.00 0.00 9851.04 3089.55 19099.06 00:07:12.946 [2024-12-13T08:18:25.312Z] =================================================================================================================== 00:07:12.946 [2024-12-13T08:18:25.312Z] Total : 12962.87 50.64 0.00 0.00 9851.04 3089.55 19099.06 00:07:12.946 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3188539 00:07:12.946 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3188541 00:07:12.946 243504.00 IOPS, 951.19 MiB/s 00:07:12.946 Latency(us) 00:07:12.946 [2024-12-13T08:18:25.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.946 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, 
IO size: 4096) 00:07:12.946 Nvme1n1 : 1.00 243140.00 949.77 0.00 0.00 523.50 220.40 1490.16 00:07:12.946 [2024-12-13T08:18:25.312Z] =================================================================================================================== 00:07:12.946 [2024-12-13T08:18:25.312Z] Total : 243140.00 949.77 0.00 0.00 523.50 220.40 1490.16 00:07:13.203 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3188544 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.204 rmmod nvme_tcp 00:07:13.204 rmmod nvme_fabrics 00:07:13.204 rmmod nvme_keyring 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3188514 ']' 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3188514 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3188514 ']' 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3188514 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188514 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188514' 00:07:13.204 killing process with pid 3188514 
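At this point the exit trap tears everything back down, as the lines just above and below show: the subsystem is deleted over RPC, the target process is killed, the initiator-side modules are unloaded, and the SPDK-tagged firewall rule plus the private namespace are removed. A condensed sketch of what nvmftestfini amounts to in this run (remove_spdk_ns deleting the namespace is an assumption based on its name and the address flush that follows):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 3188514                                          # pid of the nvmf_tgt started above
  modprobe -r nvme-tcp nvme-fabrics                     # plus nvme_keyring, per the rmmod lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                       # cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1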
00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3188514 00:07:13.204 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3188514 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.463 09:18:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.365 00:07:15.365 real 0m10.577s 00:07:15.365 user 0m16.268s 00:07:15.365 sys 0m5.864s 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:15.365 ************************************ 00:07:15.365 END TEST nvmf_bdev_io_wait 00:07:15.365 ************************************ 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.365 09:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.624 ************************************ 00:07:15.624 START TEST nvmf_queue_depth 00:07:15.624 ************************************ 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:15.624 * Looking for test storage... 
00:07:15.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.624 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.625 --rc genhtml_branch_coverage=1 00:07:15.625 --rc genhtml_function_coverage=1 00:07:15.625 --rc genhtml_legend=1 00:07:15.625 --rc geninfo_all_blocks=1 00:07:15.625 --rc geninfo_unexecuted_blocks=1 00:07:15.625 00:07:15.625 ' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.625 --rc genhtml_branch_coverage=1 00:07:15.625 --rc genhtml_function_coverage=1 00:07:15.625 --rc genhtml_legend=1 00:07:15.625 --rc geninfo_all_blocks=1 00:07:15.625 --rc geninfo_unexecuted_blocks=1 00:07:15.625 00:07:15.625 ' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.625 --rc genhtml_branch_coverage=1 00:07:15.625 --rc genhtml_function_coverage=1 00:07:15.625 --rc genhtml_legend=1 00:07:15.625 --rc geninfo_all_blocks=1 00:07:15.625 --rc geninfo_unexecuted_blocks=1 00:07:15.625 00:07:15.625 ' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.625 --rc genhtml_branch_coverage=1 00:07:15.625 --rc genhtml_function_coverage=1 00:07:15.625 --rc genhtml_legend=1 00:07:15.625 --rc geninfo_all_blocks=1 00:07:15.625 --rc geninfo_unexecuted_blocks=1 00:07:15.625 00:07:15.625 ' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.625 09:18:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:20.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:20.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:20.892 Found net devices under 0000:af:00.0: cvl_0_0 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:20.892 Found net devices under 0000:af:00.1: cvl_0_1 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.892 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.893 09:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:07:20.893 00:07:20.893 --- 10.0.0.2 ping statistics --- 00:07:20.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.893 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:20.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:07:20.893 00:07:20.893 --- 10.0.0.1 ping statistics --- 00:07:20.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.893 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3192268 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3192268 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3192268 ']' 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.893 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.151 [2024-12-13 09:18:33.264009] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:21.151 [2024-12-13 09:18:33.264053] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.151 [2024-12-13 09:18:33.332463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.151 [2024-12-13 09:18:33.371815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.151 [2024-12-13 09:18:33.371854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.151 [2024-12-13 09:18:33.371861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.151 [2024-12-13 09:18:33.371867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.151 [2024-12-13 09:18:33.371872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.151 [2024-12-13 09:18:33.372370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.151 [2024-12-13 09:18:33.504379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.151 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 Malloc0 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.410 09:18:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 [2024-12-13 09:18:33.546635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3192473 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3192473 /var/tmp/bdevperf.sock 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3192473 ']' 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:21.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.410 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 [2024-12-13 09:18:33.598634] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
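The queue-depth target is now fully configured and bdevperf has been launched as the initiator-side workload (queue depth 1024, 4 KiB verify I/O for 10 seconds). Condensed from the rpc_cmd calls at queue_depth.sh@23-27 and the bdevperf invocation at @29 above; rpc_cmd is the harness wrapper, so the standalone equivalent would be roughly (repo-relative paths assumed):

  # create the TCP transport and a 64 MB malloc bdev with 512-byte blocks as the backing device
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # expose Malloc0 through subsystem cnode1 and listen on the in-namespace target address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf in -z (wait) mode, driven later over its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10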
00:07:21.410 [2024-12-13 09:18:33.598678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192473 ] 00:07:21.410 [2024-12-13 09:18:33.661591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.410 [2024-12-13 09:18:33.703234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.668 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.668 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:21.668 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:21.668 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.668 09:18:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.668 NVMe0n1 00:07:21.668 09:18:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.668 09:18:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:21.926 Running I/O for 10 seconds... 00:07:23.800 11864.00 IOPS, 46.34 MiB/s [2024-12-13T08:18:37.539Z] 12131.50 IOPS, 47.39 MiB/s [2024-12-13T08:18:38.474Z] 12211.33 IOPS, 47.70 MiB/s [2024-12-13T08:18:39.408Z] 12275.00 IOPS, 47.95 MiB/s [2024-12-13T08:18:40.342Z] 12284.80 IOPS, 47.99 MiB/s [2024-12-13T08:18:41.277Z] 12331.67 IOPS, 48.17 MiB/s [2024-12-13T08:18:42.212Z] 12368.00 IOPS, 48.31 MiB/s [2024-12-13T08:18:43.291Z] 12397.38 IOPS, 48.43 MiB/s [2024-12-13T08:18:44.225Z] 12418.11 IOPS, 48.51 MiB/s [2024-12-13T08:18:44.226Z] 12467.70 IOPS, 48.70 MiB/s 00:07:31.860 Latency(us) 00:07:31.860 [2024-12-13T08:18:44.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.860 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:31.860 Verification LBA range: start 0x0 length 0x4000 00:07:31.860 NVMe0n1 : 10.06 12491.10 48.79 0.00 0.00 81726.52 18724.57 55424.73 00:07:31.860 [2024-12-13T08:18:44.226Z] =================================================================================================================== 00:07:31.860 [2024-12-13T08:18:44.226Z] Total : 12491.10 48.79 0.00 0.00 81726.52 18724.57 55424.73 00:07:31.860 { 00:07:31.860 "results": [ 00:07:31.860 { 00:07:31.860 "job": "NVMe0n1", 00:07:31.860 "core_mask": "0x1", 00:07:31.860 "workload": "verify", 00:07:31.860 "status": "finished", 00:07:31.860 "verify_range": { 00:07:31.860 "start": 0, 00:07:31.860 "length": 16384 00:07:31.860 }, 00:07:31.860 "queue_depth": 1024, 00:07:31.860 "io_size": 4096, 00:07:31.860 "runtime": 10.062682, 00:07:31.860 "iops": 12491.103266504893, 00:07:31.860 "mibps": 48.79337213478474, 00:07:31.860 "io_failed": 0, 00:07:31.860 "io_timeout": 0, 00:07:31.860 "avg_latency_us": 81726.51693305056, 00:07:31.860 "min_latency_us": 18724.571428571428, 00:07:31.860 "max_latency_us": 55424.73142857143 00:07:31.860 } 00:07:31.860 ], 00:07:31.860 "core_count": 1 00:07:31.860 } 00:07:32.118 09:18:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3192473 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3192473 ']' 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3192473 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192473 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192473' 00:07:32.118 killing process with pid 3192473 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3192473 00:07:32.118 Received shutdown signal, test time was about 10.000000 seconds 00:07:32.118 00:07:32.118 Latency(us) 00:07:32.118 [2024-12-13T08:18:44.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.118 [2024-12-13T08:18:44.484Z] =================================================================================================================== 00:07:32.118 [2024-12-13T08:18:44.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3192473 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.118 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.118 rmmod nvme_tcp 00:07:32.118 rmmod nvme_fabrics 00:07:32.118 rmmod nvme_keyring 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3192268 ']' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3192268 ']' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192268' 00:07:32.377 killing process with pid 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3192268 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.377 09:18:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.908 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.908 00:07:34.908 real 0m19.044s 00:07:34.908 user 0m22.895s 00:07:34.908 sys 0m5.503s 00:07:34.908 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.908 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:34.908 ************************************ 00:07:34.908 END TEST nvmf_queue_depth 00:07:34.908 ************************************ 00:07:34.908 09:18:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.909 ************************************ 00:07:34.909 START TEST nvmf_target_multipath 00:07:34.909 ************************************ 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:34.909 * Looking for test storage... 00:07:34.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.909 09:18:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.909 --rc genhtml_branch_coverage=1 00:07:34.909 --rc genhtml_function_coverage=1 00:07:34.909 --rc genhtml_legend=1 00:07:34.909 --rc geninfo_all_blocks=1 00:07:34.909 --rc geninfo_unexecuted_blocks=1 00:07:34.909 00:07:34.909 ' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.909 --rc genhtml_branch_coverage=1 00:07:34.909 --rc genhtml_function_coverage=1 00:07:34.909 --rc genhtml_legend=1 00:07:34.909 --rc geninfo_all_blocks=1 00:07:34.909 --rc geninfo_unexecuted_blocks=1 00:07:34.909 00:07:34.909 ' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.909 --rc genhtml_branch_coverage=1 00:07:34.909 --rc genhtml_function_coverage=1 00:07:34.909 --rc genhtml_legend=1 00:07:34.909 --rc geninfo_all_blocks=1 00:07:34.909 --rc geninfo_unexecuted_blocks=1 00:07:34.909 00:07:34.909 ' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.909 --rc genhtml_branch_coverage=1 00:07:34.909 --rc genhtml_function_coverage=1 00:07:34.909 --rc genhtml_legend=1 00:07:34.909 --rc geninfo_all_blocks=1 00:07:34.909 --rc geninfo_unexecuted_blocks=1 00:07:34.909 00:07:34.909 ' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.909 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.910 09:18:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.176 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:40.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:40.177 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:40.177 Found net devices under 0000:af:00.0: cvl_0_0 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.177 09:18:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:40.177 Found net devices under 0000:af:00.1: cvl_0_1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:07:40.177 00:07:40.177 --- 10.0.0.2 ping statistics --- 00:07:40.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.177 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:40.177 00:07:40.177 --- 10.0.0.1 ping statistics --- 00:07:40.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.177 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:40.177 only one NIC for nvmf test 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:40.177 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
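multipath.sh rebuilt the same single-pair TCP topology, but the test body needs a second target/initiator address pair and bails out cleanly when none is configured: NVMF_SECOND_TARGET_IP was left empty during nvmf_tcp_init above, so the '-z' check at multipath.sh@45 prints 'only one NIC for nvmf test' and the script exits 0 after tearing the environment back down. A rough sketch of that guard (the exact variable tested is an assumption inferred from the empty string in the check above):

  # multipath needs two target-side addresses; skip the test otherwise
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi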
00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.178 rmmod nvme_tcp 00:07:40.178 rmmod nvme_fabrics 00:07:40.178 rmmod nvme_keyring 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.178 09:18:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.710 00:07:42.710 real 0m7.755s 00:07:42.710 user 0m1.598s 00:07:42.710 sys 0m4.082s 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:42.710 ************************************ 00:07:42.710 END TEST nvmf_target_multipath 00:07:42.710 ************************************ 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.710 ************************************ 00:07:42.710 START TEST nvmf_zcopy 00:07:42.710 ************************************ 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:42.710 * Looking for test storage... 
00:07:42.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.710 --rc genhtml_branch_coverage=1 00:07:42.710 --rc genhtml_function_coverage=1 00:07:42.710 --rc genhtml_legend=1 00:07:42.710 --rc geninfo_all_blocks=1 00:07:42.710 --rc geninfo_unexecuted_blocks=1 00:07:42.710 00:07:42.710 ' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.710 --rc genhtml_branch_coverage=1 00:07:42.710 --rc genhtml_function_coverage=1 00:07:42.710 --rc genhtml_legend=1 00:07:42.710 --rc geninfo_all_blocks=1 00:07:42.710 --rc geninfo_unexecuted_blocks=1 00:07:42.710 00:07:42.710 ' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.710 --rc genhtml_branch_coverage=1 00:07:42.710 --rc genhtml_function_coverage=1 00:07:42.710 --rc genhtml_legend=1 00:07:42.710 --rc geninfo_all_blocks=1 00:07:42.710 --rc geninfo_unexecuted_blocks=1 00:07:42.710 00:07:42.710 ' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.710 --rc genhtml_branch_coverage=1 00:07:42.710 --rc genhtml_function_coverage=1 00:07:42.710 --rc genhtml_legend=1 00:07:42.710 --rc geninfo_all_blocks=1 00:07:42.710 --rc geninfo_unexecuted_blocks=1 00:07:42.710 00:07:42.710 ' 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.710 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.711 09:18:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:47.981 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:47.981 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:47.981 Found net devices under 0000:af:00.0: cvl_0_0 00:07:47.981 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:47.982 Found net devices under 0000:af:00.1: cvl_0_1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:07:47.982 00:07:47.982 --- 10.0.0.2 ping statistics --- 00:07:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.982 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:07:47.982 00:07:47.982 --- 10.0.0.1 ping statistics --- 00:07:47.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.982 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.982 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3201012 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3201012 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3201012 ']' 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.242 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.242 [2024-12-13 09:19:00.411649] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:48.242 [2024-12-13 09:19:00.411697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.242 [2024-12-13 09:19:00.486949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.242 [2024-12-13 09:19:00.538648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.242 [2024-12-13 09:19:00.538694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.242 [2024-12-13 09:19:00.538707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.242 [2024-12-13 09:19:00.538717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.242 [2024-12-13 09:19:00.538726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.242 [2024-12-13 09:19:00.539400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 [2024-12-13 09:19:00.687201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 [2024-12-13 09:19:00.703365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 malloc0 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.501 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.501 { 00:07:48.501 "params": { 00:07:48.501 "name": "Nvme$subsystem", 00:07:48.501 "trtype": "$TEST_TRANSPORT", 00:07:48.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.501 "adrfam": "ipv4", 00:07:48.501 "trsvcid": "$NVMF_PORT", 00:07:48.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.502 "hdgst": ${hdgst:-false}, 00:07:48.502 "ddgst": ${ddgst:-false} 00:07:48.502 }, 00:07:48.502 "method": "bdev_nvme_attach_controller" 00:07:48.502 } 00:07:48.502 EOF 00:07:48.502 )") 00:07:48.502 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:48.502 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:48.502 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:48.502 09:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.502 "params": { 00:07:48.502 "name": "Nvme1", 00:07:48.502 "trtype": "tcp", 00:07:48.502 "traddr": "10.0.0.2", 00:07:48.502 "adrfam": "ipv4", 00:07:48.502 "trsvcid": "4420", 00:07:48.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.502 "hdgst": false, 00:07:48.502 "ddgst": false 00:07:48.502 }, 00:07:48.502 "method": "bdev_nvme_attach_controller" 00:07:48.502 }' 00:07:48.502 [2024-12-13 09:19:00.783466] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:07:48.502 [2024-12-13 09:19:00.783507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201223 ] 00:07:48.502 [2024-12-13 09:19:00.846567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.760 [2024-12-13 09:19:00.887624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.760 Running I/O for 10 seconds... 00:07:51.069 8764.00 IOPS, 68.47 MiB/s [2024-12-13T08:19:04.369Z] 8853.50 IOPS, 69.17 MiB/s [2024-12-13T08:19:05.305Z] 8888.67 IOPS, 69.44 MiB/s [2024-12-13T08:19:06.239Z] 8899.75 IOPS, 69.53 MiB/s [2024-12-13T08:19:07.172Z] 8906.00 IOPS, 69.58 MiB/s [2024-12-13T08:19:08.546Z] 8910.17 IOPS, 69.61 MiB/s [2024-12-13T08:19:09.481Z] 8905.14 IOPS, 69.57 MiB/s [2024-12-13T08:19:10.417Z] 8915.25 IOPS, 69.65 MiB/s [2024-12-13T08:19:11.352Z] 8909.22 IOPS, 69.60 MiB/s [2024-12-13T08:19:11.352Z] 8898.30 IOPS, 69.52 MiB/s 00:07:58.986 Latency(us) 00:07:58.986 [2024-12-13T08:19:11.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:07:58.986 Verification LBA range: start 0x0 length 0x1000 00:07:58.986 Nvme1n1 : 10.01 8900.12 69.53 0.00 0.00 14340.29 1607.19 22719.15 00:07:58.986 [2024-12-13T08:19:11.352Z] =================================================================================================================== 00:07:58.986 [2024-12-13T08:19:11.352Z] Total : 8900.12 69.53 0.00 0.00 14340.29 1607.19 22719.15 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3202826 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:58.986 { 00:07:58.986 "params": { 00:07:58.986 "name": "Nvme$subsystem", 00:07:58.986 "trtype": "$TEST_TRANSPORT", 00:07:58.986 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:07:58.986 "adrfam": "ipv4", 00:07:58.986 "trsvcid": "$NVMF_PORT", 00:07:58.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:58.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:58.986 "hdgst": ${hdgst:-false}, 00:07:58.986 "ddgst": ${ddgst:-false} 00:07:58.986 }, 00:07:58.986 "method": "bdev_nvme_attach_controller" 00:07:58.986 } 00:07:58.986 EOF 00:07:58.986 )") 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:58.986 [2024-12-13 09:19:11.292708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.292741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:58.986 09:19:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:58.986 "params": { 00:07:58.986 "name": "Nvme1", 00:07:58.986 "trtype": "tcp", 00:07:58.986 "traddr": "10.0.0.2", 00:07:58.986 "adrfam": "ipv4", 00:07:58.986 "trsvcid": "4420", 00:07:58.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:58.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:58.986 "hdgst": false, 00:07:58.986 "ddgst": false 00:07:58.986 }, 00:07:58.986 "method": "bdev_nvme_attach_controller" 00:07:58.986 }' 00:07:58.986 [2024-12-13 09:19:11.300690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.300705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 [2024-12-13 09:19:11.308709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.308720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 [2024-12-13 09:19:11.316472] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:07:58.986 [2024-12-13 09:19:11.316514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202826 ] 00:07:58.986 [2024-12-13 09:19:11.316728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.316738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 [2024-12-13 09:19:11.324748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.324759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 [2024-12-13 09:19:11.336778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.336788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.986 [2024-12-13 09:19:11.344800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.986 [2024-12-13 09:19:11.344812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.352823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.352833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.360844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.360854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.368865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.368876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.374273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.245 [2024-12-13 09:19:11.376886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.376897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.384909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.384921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.392929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.392944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.400952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.400963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.408973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.408984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.416591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.245 [2024-12-13 09:19:11.416995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:07:59.245 [2024-12-13 09:19:11.417011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.425016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.425027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.433043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.433062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.441072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.441091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.449088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.449099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.461119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.461131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.469137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.469148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.477161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.477172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.485180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.485191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.493198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.493208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.501220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.501230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.509261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.509283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.517276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.517290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.525294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.525308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.533313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.533327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 
09:19:11.541333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.541345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.549354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.549364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.557376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.557386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.565399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.565410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.573422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.573440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.581444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.581463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.589474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.589489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.597490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.597501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.245 [2024-12-13 09:19:11.605519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.245 [2024-12-13 09:19:11.605539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.613537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.613549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 Running I/O for 5 seconds... 
00:07:59.504 [2024-12-13 09:19:11.621555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.621565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.632706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.632727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.641707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.641727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.650920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.650939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.660574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.660594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.669100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.669120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.678127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.678145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.687608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.687626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.697277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.697296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.705793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.705812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.714178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.714196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.722596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.722616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.731584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.731603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.740467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.740489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.749529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 
[2024-12-13 09:19:11.749548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.758403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.758421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.767552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.767570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.777110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.777128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.786603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.786622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.795014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.795033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.804144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.804163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.812680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.812698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.821133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.821152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.830045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.830063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.838519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.504 [2024-12-13 09:19:11.838538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.504 [2024-12-13 09:19:11.847017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.505 [2024-12-13 09:19:11.847035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.505 [2024-12-13 09:19:11.856214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.505 [2024-12-13 09:19:11.856232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.505 [2024-12-13 09:19:11.864716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.505 [2024-12-13 09:19:11.864735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.874018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.874038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.883005] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.883023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.892554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.892573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.901983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.902001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.910421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.910444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.919403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.919421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.928501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.928519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.937578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.937597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.946459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.946478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.955558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.955577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.964456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.964475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.973658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.973676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.983192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.983211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:11.991665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:11.991683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.001129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.001147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.009673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.009692] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.018524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.018544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.027031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.027050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.035871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.035890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.044912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.044930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.054008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.054026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.062956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.062974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.071978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.071997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.080898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.080918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.089357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.089376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.098987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.099008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.107558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.107577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.116295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.116321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.764 [2024-12-13 09:19:12.125349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.764 [2024-12-13 09:19:12.125367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.134596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.134615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.143746] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.143765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.152699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.152718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.161689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.161708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.170631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.170649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.179650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.179668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.188634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.188652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.197818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.197836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.207211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.207230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.215729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.215748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.225208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.225226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.233553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.233571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.242106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.242125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.250505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.250524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.259652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.259670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.268610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.268628] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.277084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.277102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.286070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.286088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.294959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.294978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.304001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.304020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.313594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.313613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.322105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.322124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.331108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.331126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.339699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.339718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.348586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.348605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.357611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.357630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.023 [2024-12-13 09:19:12.366560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.023 [2024-12-13 09:19:12.366578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.024 [2024-12-13 09:19:12.375401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.024 [2024-12-13 09:19:12.375419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.024 [2024-12-13 09:19:12.384548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.024 [2024-12-13 09:19:12.384566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.393711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.393734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.402810] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.402829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.411181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.411200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.420815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.420834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.429896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.429914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.438858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.438876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.447964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.447982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.456989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.457008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.465803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.465821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.474689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.474708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.484177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.484196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.493275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.493293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.502284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.502302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.511770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.511789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.520322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.520340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.529088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.529107] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.537985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.538003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.546919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.546938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.556483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.556502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.565482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.565501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.282 [2024-12-13 09:19:12.574068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.282 [2024-12-13 09:19:12.574087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.583025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.583047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.592024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.592042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.600910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.600929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.609964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.609983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.618764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.618783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 17088.00 IOPS, 133.50 MiB/s [2024-12-13T08:19:12.649Z] [2024-12-13 09:19:12.627690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.627708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.636659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.636681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.283 [2024-12-13 09:19:12.645920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.283 [2024-12-13 09:19:12.645940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.655391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.655410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 
09:19:12.664420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.664440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.673562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.673581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.682813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.682831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.691629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.691648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.700434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.700460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.709470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.709490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.719014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.719033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.727464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.727483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.736598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.736617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.746006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.746025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.754502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.754524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.763598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.763617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.771964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.771983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.781318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.781337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.789930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.789949] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.799012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.799030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.808414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.808434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.816832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.816851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.825729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.825748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.541 [2024-12-13 09:19:12.834760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.541 [2024-12-13 09:19:12.834779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.843635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.843654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.852615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.852633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.862197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.862216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.870561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.870581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.879612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.879630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.888556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.888575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.897970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.897989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.542 [2024-12-13 09:19:12.906642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.542 [2024-12-13 09:19:12.906662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.916196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.916215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.924913] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.924936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.934431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.934458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.943486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.943506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.953058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.953078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.961627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.961647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.971058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.971076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.980139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.980157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.988595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.988614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:12.997111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:12.997130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.006077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.006095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.014910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.014928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.024117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.024135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.032715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.032733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.041760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.041779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.050837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.050856] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.059905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.059923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.068941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.068960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.077960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.077980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.087032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.087050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.095951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.095969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.104873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.104892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.113830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.113848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.123001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.123021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.132329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.132349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.141792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.141811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.150250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.150268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.800 [2024-12-13 09:19:13.159129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.800 [2024-12-13 09:19:13.159148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.168232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.168250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.177351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.177369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.186430] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.186454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.195384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.195403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.204815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.204834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.213865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.213884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.222866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.222885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.231862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.231881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.240770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.240788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.249611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.249630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.259128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.259147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.267486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.267504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.276599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.276617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.285441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.285465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.294805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.294824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.304005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.304024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.058 [2024-12-13 09:19:13.312904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.058 [2024-12-13 09:19:13.312922] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.322351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.322370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.331414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.331433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.339740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.339759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.348796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.348814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.358006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.358024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.367002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.367020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.376640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.376660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.385852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.385870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.394432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.394456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.403614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.403632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.412098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.412116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.059 [2024-12-13 09:19:13.421257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.059 [2024-12-13 09:19:13.421276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.430347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.430365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.439506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.439525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.448470] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.448489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.457585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.457603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.466013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.466031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.475131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.475149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.484616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.484635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.493822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.493841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.502822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.502840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.512301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.512318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.520693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.520711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.529028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.529046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.538022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.538041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.547259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.547278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.556135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.556153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.565282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.565301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.574403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.574422] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.583509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.583528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.592441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.592464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.601433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.601458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.610559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.610578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.619576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.619595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 17222.50 IOPS, 134.55 MiB/s [2024-12-13T08:19:13.683Z] [2024-12-13 09:19:13.628354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.628373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.637354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.637372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.646985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.647003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.655320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.655338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.664274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.664292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.673157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.673175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.317 [2024-12-13 09:19:13.682136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.317 [2024-12-13 09:19:13.682155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.576 [2024-12-13 09:19:13.691082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.576 [2024-12-13 09:19:13.691100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.576 [2024-12-13 09:19:13.700619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:01.576 [2024-12-13 09:19:13.700638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:01.576 [2024-12-13 
09:19:13.708954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:01.576 [2024-12-13 09:19:13.708973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair repeats for each further add-namespace attempt from 09:19:13.718 through 09:19:14.618 ...]
00:08:02.382 17251.67 IOPS, 134.78 MiB/s [2024-12-13T08:19:14.748Z]
[... the error pair continues to repeat from 09:19:14.627 through 09:19:15.628 ...]
00:08:03.415 17266.75 IOPS, 134.90 MiB/s [2024-12-13T08:19:15.781Z]
[... the error pair continues to repeat from 09:19:15.637 through 09:19:16.445 ...]
00:08:04.191 [2024-12-13 09:19:16.455089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-13 09:19:16.455108]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.464377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.464395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.473226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.473244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.482384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.482403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.490734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.490752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.500121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.500143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.508664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.508683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.517564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.517583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.526355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.526374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.535757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.535776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.544257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.544276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.191 [2024-12-13 09:19:16.553456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.191 [2024-12-13 09:19:16.553475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.449 [2024-12-13 09:19:16.562017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.562036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.571711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.571730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.580071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.580090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.589455] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.589473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.597907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.597925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.607428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.607446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.615928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.615947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.625602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.625620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 17282.00 IOPS, 135.02 MiB/s [2024-12-13T08:19:16.816Z] [2024-12-13 09:19:16.633592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.633610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 00:08:04.450 Latency(us) 00:08:04.450 [2024-12-13T08:19:16.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.450 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:04.450 Nvme1n1 : 5.01 17282.65 135.02 0.00 0.00 7399.04 2543.42 16352.79 00:08:04.450 [2024-12-13T08:19:16.816Z] =================================================================================================================== 00:08:04.450 [2024-12-13T08:19:16.816Z] Total : 17282.65 135.02 0.00 0.00 7399.04 2543.42 16352.79 00:08:04.450 [2024-12-13 09:19:16.640192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.640210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.648228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.648242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.656246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.656258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.664270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.664285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.672299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.672314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.680313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.680324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 
09:19:16.692351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.692365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.704377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.704392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.716413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.716430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.728446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.728468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.740483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.740497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.748497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.748508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.756513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.756525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.764532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.764544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.772559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.772569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.780579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.780590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.788597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.788607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 [2024-12-13 09:19:16.796617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.450 [2024-12-13 09:19:16.796627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3202826) - No such process 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3202826 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.450 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.712 delay0 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.712 09:19:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:04.712 [2024-12-13 09:19:16.936971] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:11.275 [2024-12-13 09:19:23.163411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c1940 is same with the state(6) to be set 00:08:11.275 Initializing NVMe Controllers 00:08:11.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:11.275 Initialization complete. Launching workers. 
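What the zcopy test is doing here, in outline: it replaces the plain malloc namespace with a delay bdev so that every queued I/O sits in the target long enough to be cancelled, then points SPDK's abort example at the TCP listener. A minimal standalone sketch of that sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and that nqn.2016-06.io.spdk:cnode1 is already listening on 10.0.0.2:4420 (the per-namespace abort counts follow below):

  # delay bdev on top of malloc0; -r/-t are average/p99 read latency,
  # -w/-n average/p99 write latency, all in microseconds
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose it as NSID 1 of the existing subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue random I/O for 5 seconds and abort the commands while they are delayed
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'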
00:08:11.275 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 825 00:08:11.275 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1103, failed to submit 42 00:08:11.275 success 897, unsuccessful 206, failed 0 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.275 rmmod nvme_tcp 00:08:11.275 rmmod nvme_fabrics 00:08:11.275 rmmod nvme_keyring 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3201012 ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3201012 ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201012' 00:08:11.275 killing process with pid 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3201012 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.275 09:19:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.275 09:19:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.176 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.176 00:08:13.176 real 0m30.815s 00:08:13.176 user 0m41.953s 00:08:13.176 sys 0m11.034s 00:08:13.176 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.176 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.176 ************************************ 00:08:13.176 END TEST nvmf_zcopy 00:08:13.176 ************************************ 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.435 ************************************ 00:08:13.435 START TEST nvmf_nmic 00:08:13.435 ************************************ 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:13.435 * Looking for test storage... 
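That closes nvmf_zcopy; the block above is the standard nvmftestfini teardown, and nmic.sh now rebuilds the same environment from scratch. Reduced to bare commands (a rough sketch; killprocess, remove_spdk_ns and nvmftestfini are harness helpers that wrap roughly these steps):

  # stop the nvmf_tgt that was started for the test (pid 3201012 in this run)
  kill 3201012
  # unload the host-side NVMe/TCP initiator modules
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the test's firewall rule, the target-side namespace, and the initiator address
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1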
00:08:13.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.435 --rc genhtml_branch_coverage=1 00:08:13.435 --rc genhtml_function_coverage=1 00:08:13.435 --rc genhtml_legend=1 00:08:13.435 --rc geninfo_all_blocks=1 00:08:13.435 --rc geninfo_unexecuted_blocks=1 00:08:13.435 00:08:13.435 ' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.435 --rc genhtml_branch_coverage=1 00:08:13.435 --rc genhtml_function_coverage=1 00:08:13.435 --rc genhtml_legend=1 00:08:13.435 --rc geninfo_all_blocks=1 00:08:13.435 --rc geninfo_unexecuted_blocks=1 00:08:13.435 00:08:13.435 ' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.435 --rc genhtml_branch_coverage=1 00:08:13.435 --rc genhtml_function_coverage=1 00:08:13.435 --rc genhtml_legend=1 00:08:13.435 --rc geninfo_all_blocks=1 00:08:13.435 --rc geninfo_unexecuted_blocks=1 00:08:13.435 00:08:13.435 ' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.435 --rc genhtml_branch_coverage=1 00:08:13.435 --rc genhtml_function_coverage=1 00:08:13.435 --rc genhtml_legend=1 00:08:13.435 --rc geninfo_all_blocks=1 00:08:13.435 --rc geninfo_unexecuted_blocks=1 00:08:13.435 00:08:13.435 ' 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
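The lcov probe above goes through the harness's dotted-version comparison (lt, which calls cmp_versions) to decide which coverage flags to export. A simplified reconstruction of that comparison, not the exact helper, just to show what lt 1.15 2 evaluates:

  # succeed (return 0) when $1 is an older version than $2, comparing dot/dash fields numerically
  lt() {
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"        # "1.15" -> (1 15)
      read -ra b <<< "$2"        # "2"    -> (2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                   # equal versions are not "less than"
  }
  lt 1.15 2 && echo old          # lcov 1.15 predates 2.x, so the 1.x coverage flags are used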
00:08:13.435 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:13.436 
09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.436 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.694 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.695 09:19:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:18.964 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:18.964 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.964 09:19:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:18.964 Found net devices under 0000:af:00.0: cvl_0_0 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.964 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:18.965 Found net devices under 0000:af:00.1: cvl_0_1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:08:18.965 00:08:18.965 --- 10.0.0.2 ping statistics --- 00:08:18.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.965 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:08:18.965 00:08:18.965 --- 10.0.0.1 ping statistics --- 00:08:18.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.965 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.965 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3208294 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3208294 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3208294 ']' 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.223 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.223 [2024-12-13 09:19:31.414314] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
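All of the cvl_0_0/cvl_0_1 plumbing above is what NET_TYPE=phy amounts to in nvmf/common.sh: the two e810 ports found at 0000:af:00.0 and 0000:af:00.1 are split so that the target runs inside its own network namespace and the initiator reaches it over the physical link. The equivalent bare commands, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator
  # the target itself is then launched inside that namespace:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The DPDK/EAL startup lines that follow are that nvmf_tgt instance coming up on cores 0-3 (-m 0xF).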
00:08:19.223 [2024-12-13 09:19:31.414357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.223 [2024-12-13 09:19:31.480165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.223 [2024-12-13 09:19:31.520739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.223 [2024-12-13 09:19:31.520778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.223 [2024-12-13 09:19:31.520785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.223 [2024-12-13 09:19:31.520791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.223 [2024-12-13 09:19:31.520796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.223 [2024-12-13 09:19:31.522272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.223 [2024-12-13 09:19:31.522369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.223 [2024-12-13 09:19:31.522473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.223 [2024-12-13 09:19:31.522475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 [2024-12-13 09:19:31.672810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 Malloc0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 [2024-12-13 09:19:31.736046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:19.482 test case1: single bdev can't be used in multiple subsystems 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 [2024-12-13 09:19:31.759950] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:19.482 [2024-12-13 09:19:31.759971] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:19.482 [2024-12-13 09:19:31.759978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.482 request: 00:08:19.482 { 00:08:19.482 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:19.482 "namespace": { 00:08:19.482 "bdev_name": "Malloc0", 00:08:19.482 "no_auto_visible": false, 
00:08:19.482 "hide_metadata": false 00:08:19.482 }, 00:08:19.482 "method": "nvmf_subsystem_add_ns", 00:08:19.482 "req_id": 1 00:08:19.482 } 00:08:19.482 Got JSON-RPC error response 00:08:19.482 response: 00:08:19.482 { 00:08:19.482 "code": -32602, 00:08:19.482 "message": "Invalid parameters" 00:08:19.482 } 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:19.482 Adding namespace failed - expected result. 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:19.482 test case2: host connect to nvmf target in multiple paths 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.482 [2024-12-13 09:19:31.772077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.482 09:19:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.854 09:19:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:21.787 09:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.787 09:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:21.787 09:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.787 09:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:21.788 09:19:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.314 09:19:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:24.314 09:19:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:24.314 [global] 00:08:24.314 thread=1 00:08:24.314 invalidate=1 00:08:24.314 rw=write 00:08:24.314 time_based=1 00:08:24.314 runtime=1 00:08:24.314 ioengine=libaio 00:08:24.314 direct=1 00:08:24.314 bs=4096 00:08:24.314 iodepth=1 00:08:24.314 norandommap=0 00:08:24.314 numjobs=1 00:08:24.314 00:08:24.314 verify_dump=1 00:08:24.314 verify_backlog=512 00:08:24.314 verify_state_save=0 00:08:24.314 do_verify=1 00:08:24.314 verify=crc32c-intel 00:08:24.314 [job0] 00:08:24.314 filename=/dev/nvme0n1 00:08:24.314 Could not set queue depth (nvme0n1) 00:08:24.314 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:24.314 fio-3.35 00:08:24.314 Starting 1 thread 00:08:25.247 00:08:25.247 job0: (groupid=0, jobs=1): err= 0: pid=3209343: Fri Dec 13 09:19:37 2024 00:08:25.247 read: IOPS=23, BW=93.2KiB/s (95.4kB/s)(96.0KiB/1030msec) 00:08:25.247 slat (nsec): min=8428, max=29255, avg=21812.96, stdev=4791.59 00:08:25.247 clat (usec): min=344, max=42071, avg=38025.83, stdev=11604.87 00:08:25.247 lat (usec): min=369, max=42101, avg=38047.64, stdev=11603.76 00:08:25.247 clat percentiles (usec): 00:08:25.247 | 1.00th=[ 347], 5.00th=[ 416], 10.00th=[40633], 20.00th=[41157], 00:08:25.247 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:08:25.247 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:25.247 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:25.247 | 99.99th=[42206] 00:08:25.247 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:08:25.247 slat (nsec): min=9443, max=38649, avg=10548.81, stdev=1981.89 00:08:25.247 clat (usec): min=116, max=434, avg=214.34, stdev=47.70 00:08:25.247 lat (usec): min=126, max=473, avg=224.89, stdev=48.03 00:08:25.247 clat percentiles (usec): 00:08:25.247 | 1.00th=[ 120], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 153], 00:08:25.247 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 243], 00:08:25.247 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 245], 95.00th=[ 247], 00:08:25.247 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 437], 99.95th=[ 437], 00:08:25.247 | 99.99th=[ 437] 00:08:25.247 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:25.247 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:25.247 lat (usec) : 250=94.78%, 500=1.12% 00:08:25.247 lat (msec) : 50=4.10% 00:08:25.248 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:08:25.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.248 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.248 00:08:25.248 Run status group 0 (all jobs): 00:08:25.248 READ: bw=93.2KiB/s (95.4kB/s), 93.2KiB/s-93.2KiB/s (95.4kB/s-95.4kB/s), io=96.0KiB (98.3kB), run=1030-1030msec 00:08:25.248 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:08:25.248 00:08:25.248 Disk stats (read/write): 00:08:25.248 nvme0n1: ios=70/512, merge=0/0, ticks=771/107, in_queue=878, util=91.38% 00:08:25.248 09:19:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.506 rmmod nvme_tcp 00:08:25.506 rmmod nvme_fabrics 00:08:25.506 rmmod nvme_keyring 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3208294 ']' 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3208294 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3208294 ']' 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3208294 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.506 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208294 00:08:25.813 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.813 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.813 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208294' 00:08:25.813 killing process with pid 3208294 00:08:25.813 09:19:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3208294 00:08:25.813 09:19:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3208294 00:08:25.813 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.814 09:19:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.777 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.035 00:08:28.035 real 0m14.556s 00:08:28.035 user 0m33.295s 00:08:28.035 sys 0m4.981s 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.035 ************************************ 00:08:28.035 END TEST nvmf_nmic 00:08:28.035 ************************************ 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.035 ************************************ 00:08:28.035 START TEST nvmf_fio_target 00:08:28.035 ************************************ 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:28.035 * Looking for test storage... 
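The nvmf_nmic run that just finished covered two cases: a bdev already claimed by one subsystem cannot be added as a namespace to a second one (the -32602 error above is the expected result), and a host can reach the same subsystem over two listeners (ports 4420 and 4421). Reduced to bare rpc.py calls, case1 looks roughly like the sketch below; NQNs, serials and the bdev name are copied from the log, and the rpc.py path is shortened:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Expected to fail: Malloc0 is already claimed exclusive_write by cnode1,
# so the target rejects the second add_ns with -32602 Invalid parameters.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0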
00:08:28.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:28.035 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.036 --rc genhtml_branch_coverage=1 00:08:28.036 --rc genhtml_function_coverage=1 00:08:28.036 --rc genhtml_legend=1 00:08:28.036 --rc geninfo_all_blocks=1 00:08:28.036 --rc geninfo_unexecuted_blocks=1 00:08:28.036 00:08:28.036 ' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.036 --rc genhtml_branch_coverage=1 00:08:28.036 --rc genhtml_function_coverage=1 00:08:28.036 --rc genhtml_legend=1 00:08:28.036 --rc geninfo_all_blocks=1 00:08:28.036 --rc geninfo_unexecuted_blocks=1 00:08:28.036 00:08:28.036 ' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.036 --rc genhtml_branch_coverage=1 00:08:28.036 --rc genhtml_function_coverage=1 00:08:28.036 --rc genhtml_legend=1 00:08:28.036 --rc geninfo_all_blocks=1 00:08:28.036 --rc geninfo_unexecuted_blocks=1 00:08:28.036 00:08:28.036 ' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.036 --rc genhtml_branch_coverage=1 00:08:28.036 --rc genhtml_function_coverage=1 00:08:28.036 --rc genhtml_legend=1 00:08:28.036 --rc geninfo_all_blocks=1 00:08:28.036 --rc geninfo_unexecuted_blocks=1 00:08:28.036 00:08:28.036 ' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.036 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.295 09:19:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.295 09:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.563 09:19:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:33.563 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:33.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.563 09:19:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:33.563 Found net devices under 0000:af:00.0: cvl_0_0 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:33.563 Found net devices under 0000:af:00.1: cvl_0_1 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.563 09:19:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.563 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.822 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.822 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.822 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.822 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.822 09:19:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:08:33.822 00:08:33.822 --- 10.0.0.2 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:33.822 00:08:33.822 --- 10.0.0.1 ping statistics --- 00:08:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.822 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.822 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3213056 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3213056 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3213056 ']' 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.823 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 [2024-12-13 09:19:46.111831] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
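The fio-target app started above runs inside the same cvl_0_0_ns_spdk namespace that the harness plumbed and ping-verified a few lines earlier. Stripped of the wrapper functions, that setup is roughly the following; interface names and addresses are copied from the log:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Reachability check in both directions, matching the two pings in the log:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1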
00:08:33.823 [2024-12-13 09:19:46.111876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.823 [2024-12-13 09:19:46.180111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.081 [2024-12-13 09:19:46.222670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.081 [2024-12-13 09:19:46.222704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.081 [2024-12-13 09:19:46.222715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.081 [2024-12-13 09:19:46.222721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.081 [2024-12-13 09:19:46.222726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.081 [2024-12-13 09:19:46.224156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.081 [2024-12-13 09:19:46.224256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.081 [2024-12-13 09:19:46.224362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.081 [2024-12-13 09:19:46.224363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.081 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.338 [2024-12-13 09:19:46.523149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.338 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.596 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:34.596 09:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.854 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:34.854 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.112 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:35.112 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.112 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:35.112 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:35.370 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.627 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:35.627 09:19:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.884 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:35.884 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.142 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:36.142 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:36.142 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.399 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:36.399 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.657 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:36.657 09:19:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:36.914 09:19:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.914 [2024-12-13 09:19:49.263560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.171 09:19:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:37.171 09:19:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:37.428 09:19:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.800 09:19:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:38.800 09:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:38.800 09:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.800 09:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:38.800 09:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:38.800 09:19:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:40.698 09:19:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:40.698 [global] 00:08:40.698 thread=1 00:08:40.698 invalidate=1 00:08:40.698 rw=write 00:08:40.698 time_based=1 00:08:40.698 runtime=1 00:08:40.698 ioengine=libaio 00:08:40.698 direct=1 00:08:40.698 bs=4096 00:08:40.698 iodepth=1 00:08:40.698 norandommap=0 00:08:40.698 numjobs=1 00:08:40.698 00:08:40.698 verify_dump=1 00:08:40.698 verify_backlog=512 00:08:40.698 verify_state_save=0 00:08:40.698 do_verify=1 00:08:40.698 verify=crc32c-intel 00:08:40.698 [job0] 00:08:40.698 filename=/dev/nvme0n1 00:08:40.698 [job1] 00:08:40.698 filename=/dev/nvme0n2 00:08:40.698 [job2] 00:08:40.698 filename=/dev/nvme0n3 00:08:40.698 [job3] 00:08:40.698 filename=/dev/nvme0n4 00:08:40.698 Could not set queue depth (nvme0n1) 00:08:40.698 Could not set queue depth (nvme0n2) 00:08:40.698 Could not set queue depth (nvme0n3) 00:08:40.698 Could not set queue depth (nvme0n4) 00:08:40.955 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:40.955 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:40.955 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:40.955 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:40.955 fio-3.35 00:08:40.955 Starting 4 threads 00:08:42.355 00:08:42.355 job0: (groupid=0, jobs=1): err= 0: pid=3214376: Fri Dec 13 09:19:54 2024 00:08:42.355 read: IOPS=2155, BW=8623KiB/s (8830kB/s)(8632KiB/1001msec) 00:08:42.355 slat (nsec): min=6650, max=24887, avg=7774.98, stdev=1132.40 00:08:42.355 clat (usec): min=176, max=682, avg=235.47, stdev=28.78 00:08:42.355 lat (usec): min=183, max=691, avg=243.24, stdev=28.79 00:08:42.355 clat percentiles (usec): 00:08:42.355 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 217], 
00:08:42.355 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:08:42.355 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 273], 00:08:42.355 | 99.00th=[ 330], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 461], 00:08:42.355 | 99.99th=[ 685] 00:08:42.355 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:42.355 slat (usec): min=9, max=23199, avg=20.19, stdev=458.29 00:08:42.355 clat (usec): min=109, max=944, avg=160.86, stdev=30.25 00:08:42.355 lat (usec): min=119, max=23375, avg=181.06, stdev=459.61 00:08:42.355 clat percentiles (usec): 00:08:42.355 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 129], 20.00th=[ 145], 00:08:42.355 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:08:42.355 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:08:42.355 | 99.00th=[ 227], 99.50th=[ 251], 99.90th=[ 570], 99.95th=[ 652], 00:08:42.355 | 99.99th=[ 947] 00:08:42.355 bw ( KiB/s): min=10192, max=10192, per=32.57%, avg=10192.00, stdev= 0.00, samples=1 00:08:42.355 iops : min= 2548, max= 2548, avg=2548.00, stdev= 0.00, samples=1 00:08:42.355 lat (usec) : 250=90.10%, 500=9.81%, 750=0.06%, 1000=0.02% 00:08:42.355 cpu : usr=3.40%, sys=3.70%, ctx=4721, majf=0, minf=1 00:08:42.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.355 issued rwts: total=2158,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.355 job1: (groupid=0, jobs=1): err= 0: pid=3214377: Fri Dec 13 09:19:54 2024 00:08:42.355 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6156KiB/1007msec) 00:08:42.355 slat (nsec): min=8019, max=43927, avg=10946.27, stdev=4431.23 00:08:42.355 clat (usec): min=190, max=41043, avg=338.41, stdev=1791.22 00:08:42.355 lat (usec): min=199, max=41056, avg=349.36, stdev=1791.32 00:08:42.355 clat percentiles (usec): 00:08:42.355 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 237], 00:08:42.355 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:08:42.355 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 322], 00:08:42.355 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[41157], 99.95th=[41157], 00:08:42.355 | 99.99th=[41157] 00:08:42.355 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:08:42.355 slat (usec): min=11, max=294, avg=15.58, stdev= 9.26 00:08:42.355 clat (usec): min=98, max=348, avg=207.18, stdev=27.60 00:08:42.355 lat (usec): min=158, max=393, avg=222.76, stdev=28.03 00:08:42.355 clat percentiles (usec): 00:08:42.355 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 184], 00:08:42.355 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 215], 00:08:42.355 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:08:42.355 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 322], 99.95th=[ 347], 00:08:42.355 | 99.99th=[ 351] 00:08:42.355 bw ( KiB/s): min= 8192, max= 8208, per=26.20%, avg=8200.00, stdev=11.31, samples=2 00:08:42.355 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:08:42.355 lat (usec) : 100=0.03%, 250=70.53%, 500=29.36% 00:08:42.355 lat (msec) : 50=0.08% 00:08:42.355 cpu : usr=3.08%, sys=6.86%, ctx=3587, majf=0, minf=2 00:08:42.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.355 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.355 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.356 job2: (groupid=0, jobs=1): err= 0: pid=3214379: Fri Dec 13 09:19:54 2024 00:08:42.356 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:42.356 slat (nsec): min=4592, max=28763, avg=7955.00, stdev=1980.85 00:08:42.356 clat (usec): min=182, max=41022, avg=428.72, stdev=2775.45 00:08:42.356 lat (usec): min=190, max=41035, avg=436.67, stdev=2776.20 00:08:42.356 clat percentiles (usec): 00:08:42.356 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:08:42.356 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:08:42.356 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 289], 00:08:42.356 | 99.00th=[ 388], 99.50th=[17957], 99.90th=[41157], 99.95th=[41157], 00:08:42.356 | 99.99th=[41157] 00:08:42.356 write: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec); 0 zone resets 00:08:42.356 slat (nsec): min=9743, max=38588, avg=11115.38, stdev=1747.69 00:08:42.356 clat (usec): min=121, max=291, avg=162.39, stdev=16.13 00:08:42.356 lat (usec): min=132, max=309, avg=173.50, stdev=16.30 00:08:42.356 clat percentiles (usec): 00:08:42.356 | 1.00th=[ 128], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:08:42.356 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:08:42.356 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 188], 00:08:42.356 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 273], 99.95th=[ 293], 00:08:42.356 | 99.99th=[ 293] 00:08:42.356 bw ( KiB/s): min= 4568, max= 4568, per=14.60%, avg=4568.00, stdev= 0.00, samples=1 00:08:42.356 iops : min= 1142, max= 1142, avg=1142.00, stdev= 0.00, samples=1 00:08:42.356 lat (usec) : 250=91.88%, 500=7.82%, 750=0.06% 00:08:42.356 lat (msec) : 20=0.03%, 50=0.21% 00:08:42.356 cpu : usr=1.80%, sys=3.30%, ctx=3389, majf=0, minf=1 00:08:42.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.356 issued rwts: total=1536,1852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.356 job3: (groupid=0, jobs=1): err= 0: pid=3214380: Fri Dec 13 09:19:54 2024 00:08:42.356 read: IOPS=1356, BW=5425KiB/s (5555kB/s)(5544KiB/1022msec) 00:08:42.356 slat (nsec): min=4150, max=23493, avg=7214.31, stdev=2262.06 00:08:42.356 clat (usec): min=196, max=42306, avg=486.08, stdev=3024.90 00:08:42.356 lat (usec): min=201, max=42313, avg=493.29, stdev=3025.23 00:08:42.356 clat percentiles (usec): 00:08:42.356 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:08:42.356 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:08:42.356 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 314], 00:08:42.356 | 99.00th=[ 437], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:08:42.356 | 99.99th=[42206] 00:08:42.356 write: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec); 0 zone resets 00:08:42.356 slat (usec): min=5, max=22877, avg=25.66, stdev=583.46 00:08:42.356 clat (usec): min=132, max=385, avg=189.77, stdev=27.78 00:08:42.356 lat (usec): min=138, max=23188, avg=215.43, stdev=587.22 00:08:42.356 clat percentiles 
(usec): 00:08:42.356 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:08:42.356 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 196], 00:08:42.356 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 237], 00:08:42.356 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 371], 99.95th=[ 388], 00:08:42.356 | 99.99th=[ 388] 00:08:42.356 bw ( KiB/s): min= 4096, max= 8192, per=19.63%, avg=6144.00, stdev=2896.31, samples=2 00:08:42.356 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:42.356 lat (usec) : 250=82.38%, 500=17.32% 00:08:42.356 lat (msec) : 50=0.31% 00:08:42.356 cpu : usr=0.98%, sys=3.62%, ctx=2924, majf=0, minf=1 00:08:42.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.356 issued rwts: total=1386,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.356 00:08:42.356 Run status group 0 (all jobs): 00:08:42.356 READ: bw=25.3MiB/s (26.5MB/s), 5425KiB/s-8623KiB/s (5555kB/s-8830kB/s), io=25.9MiB (27.1MB), run=1001-1022msec 00:08:42.356 WRITE: bw=30.6MiB/s (32.0MB/s), 6012KiB/s-9.99MiB/s (6156kB/s-10.5MB/s), io=31.2MiB (32.8MB), run=1001-1022msec 00:08:42.356 00:08:42.356 Disk stats (read/write): 00:08:42.356 nvme0n1: ios=1917/2048, merge=0/0, ticks=1420/331, in_queue=1751, util=98.19% 00:08:42.356 nvme0n2: ios=1553/1908, merge=0/0, ticks=401/385, in_queue=786, util=87.08% 00:08:42.356 nvme0n3: ios=1356/1536, merge=0/0, ticks=1077/237, in_queue=1314, util=98.54% 00:08:42.356 nvme0n4: ios=1220/1536, merge=0/0, ticks=1491/282, in_queue=1773, util=98.42% 00:08:42.356 09:19:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:42.356 [global] 00:08:42.356 thread=1 00:08:42.356 invalidate=1 00:08:42.356 rw=randwrite 00:08:42.356 time_based=1 00:08:42.356 runtime=1 00:08:42.356 ioengine=libaio 00:08:42.356 direct=1 00:08:42.356 bs=4096 00:08:42.356 iodepth=1 00:08:42.356 norandommap=0 00:08:42.356 numjobs=1 00:08:42.356 00:08:42.356 verify_dump=1 00:08:42.356 verify_backlog=512 00:08:42.356 verify_state_save=0 00:08:42.356 do_verify=1 00:08:42.356 verify=crc32c-intel 00:08:42.356 [job0] 00:08:42.356 filename=/dev/nvme0n1 00:08:42.356 [job1] 00:08:42.356 filename=/dev/nvme0n2 00:08:42.356 [job2] 00:08:42.356 filename=/dev/nvme0n3 00:08:42.356 [job3] 00:08:42.356 filename=/dev/nvme0n4 00:08:42.356 Could not set queue depth (nvme0n1) 00:08:42.356 Could not set queue depth (nvme0n2) 00:08:42.356 Could not set queue depth (nvme0n3) 00:08:42.356 Could not set queue depth (nvme0n4) 00:08:42.615 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.615 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.615 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.615 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.615 fio-3.35 00:08:42.615 Starting 4 threads 00:08:43.984 00:08:43.984 job0: (groupid=0, jobs=1): err= 0: pid=3214764: Fri Dec 13 09:19:55 2024 00:08:43.984 read: IOPS=26, BW=106KiB/s 
(109kB/s)(108KiB/1018msec) 00:08:43.984 slat (nsec): min=8760, max=24704, avg=20161.74, stdev=5335.13 00:08:43.984 clat (usec): min=368, max=41086, avg=33464.92, stdev=16033.51 00:08:43.984 lat (usec): min=378, max=41108, avg=33485.08, stdev=16038.23 00:08:43.984 clat percentiles (usec): 00:08:43.984 | 1.00th=[ 367], 5.00th=[ 433], 10.00th=[ 494], 20.00th=[40633], 00:08:43.984 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:43.984 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:43.984 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:43.984 | 99.99th=[41157] 00:08:43.984 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:08:43.984 slat (nsec): min=10491, max=39254, avg=12657.91, stdev=2271.26 00:08:43.984 clat (usec): min=135, max=373, avg=204.85, stdev=38.05 00:08:43.984 lat (usec): min=150, max=387, avg=217.51, stdev=37.96 00:08:43.984 clat percentiles (usec): 00:08:43.984 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:08:43.984 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 208], 00:08:43.984 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 281], 00:08:43.984 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 375], 99.95th=[ 375], 00:08:43.984 | 99.99th=[ 375] 00:08:43.984 bw ( KiB/s): min= 4096, max= 4096, per=25.70%, avg=4096.00, stdev= 0.00, samples=1 00:08:43.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:43.984 lat (usec) : 250=86.64%, 500=8.91%, 750=0.37% 00:08:43.984 lat (msec) : 50=4.08% 00:08:43.984 cpu : usr=0.79%, sys=0.69%, ctx=541, majf=0, minf=1 00:08:43.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.984 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.984 job1: (groupid=0, jobs=1): err= 0: pid=3214781: Fri Dec 13 09:19:55 2024 00:08:43.984 read: IOPS=71, BW=288KiB/s (295kB/s)(296KiB/1028msec) 00:08:43.984 slat (nsec): min=6620, max=29191, avg=12349.30, stdev=7501.44 00:08:43.984 clat (usec): min=184, max=42144, avg=12450.85, stdev=18925.25 00:08:43.984 lat (usec): min=192, max=42168, avg=12463.20, stdev=18932.69 00:08:43.984 clat percentiles (usec): 00:08:43.984 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:08:43.984 | 30.00th=[ 215], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 253], 00:08:43.984 | 70.00th=[ 277], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:08:43.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:43.984 | 99.99th=[42206] 00:08:43.984 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:08:43.984 slat (nsec): min=9433, max=38335, avg=12596.27, stdev=3254.42 00:08:43.984 clat (usec): min=124, max=366, avg=188.80, stdev=39.39 00:08:43.984 lat (usec): min=143, max=383, avg=201.40, stdev=40.95 00:08:43.984 clat percentiles (usec): 00:08:43.984 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:08:43.984 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 190], 00:08:43.984 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:08:43.984 | 99.00th=[ 302], 99.50th=[ 330], 99.90th=[ 367], 99.95th=[ 367], 00:08:43.984 | 99.99th=[ 367] 00:08:43.984 bw ( KiB/s): min= 4096, max= 
4096, per=25.70%, avg=4096.00, stdev= 0.00, samples=1 00:08:43.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:43.984 lat (usec) : 250=89.42%, 500=6.83% 00:08:43.984 lat (msec) : 50=3.75% 00:08:43.984 cpu : usr=0.39%, sys=0.58%, ctx=587, majf=0, minf=1 00:08:43.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.984 issued rwts: total=74,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.984 job2: (groupid=0, jobs=1): err= 0: pid=3214801: Fri Dec 13 09:19:55 2024 00:08:43.984 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:08:43.984 slat (nsec): min=9860, max=23315, avg=20118.45, stdev=4835.01 00:08:43.984 clat (usec): min=40855, max=41985, avg=41125.07, stdev=357.69 00:08:43.984 lat (usec): min=40877, max=42008, avg=41145.19, stdev=357.58 00:08:43.984 clat percentiles (usec): 00:08:43.984 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:43.984 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:43.984 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:08:43.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:43.984 | 99.99th=[42206] 00:08:43.984 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:08:43.984 slat (nsec): min=9435, max=38253, avg=10589.83, stdev=1572.38 00:08:43.984 clat (usec): min=145, max=332, avg=180.43, stdev=16.81 00:08:43.985 lat (usec): min=155, max=370, avg=191.02, stdev=17.52 00:08:43.985 clat percentiles (usec): 00:08:43.985 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:08:43.985 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:08:43.985 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 206], 00:08:43.985 | 99.00th=[ 225], 99.50th=[ 241], 99.90th=[ 334], 99.95th=[ 334], 00:08:43.985 | 99.99th=[ 334] 00:08:43.985 bw ( KiB/s): min= 4096, max= 4096, per=25.70%, avg=4096.00, stdev= 0.00, samples=1 00:08:43.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:43.985 lat (usec) : 250=95.51%, 500=0.37% 00:08:43.985 lat (msec) : 50=4.12% 00:08:43.985 cpu : usr=0.10%, sys=0.70%, ctx=534, majf=0, minf=1 00:08:43.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.985 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.985 job3: (groupid=0, jobs=1): err= 0: pid=3214807: Fri Dec 13 09:19:55 2024 00:08:43.985 read: IOPS=2063, BW=8256KiB/s (8454kB/s)(8264KiB/1001msec) 00:08:43.985 slat (nsec): min=3696, max=45704, avg=8379.08, stdev=1583.30 00:08:43.985 clat (usec): min=193, max=41467, avg=257.48, stdev=907.33 00:08:43.985 lat (usec): min=201, max=41472, avg=265.85, stdev=907.26 00:08:43.985 clat percentiles (usec): 00:08:43.985 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:08:43.985 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:08:43.985 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:08:43.985 | 99.00th=[ 297], 99.50th=[ 
351], 99.90th=[ 437], 99.95th=[ 461], 00:08:43.985 | 99.99th=[41681] 00:08:43.985 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:43.985 slat (nsec): min=10427, max=38131, avg=11912.82, stdev=1945.69 00:08:43.985 clat (usec): min=120, max=436, avg=158.89, stdev=26.99 00:08:43.985 lat (usec): min=131, max=466, avg=170.80, stdev=27.86 00:08:43.985 clat percentiles (usec): 00:08:43.985 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:08:43.985 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:08:43.985 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 217], 00:08:43.985 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 334], 00:08:43.985 | 99.99th=[ 437] 00:08:43.985 bw ( KiB/s): min= 8976, max= 8976, per=56.32%, avg=8976.00, stdev= 0.00, samples=1 00:08:43.985 iops : min= 2244, max= 2244, avg=2244.00, stdev= 0.00, samples=1 00:08:43.985 lat (usec) : 250=88.72%, 500=11.26% 00:08:43.985 lat (msec) : 50=0.02% 00:08:43.985 cpu : usr=5.20%, sys=6.20%, ctx=4627, majf=0, minf=1 00:08:43.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.985 issued rwts: total=2066,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.985 00:08:43.985 Run status group 0 (all jobs): 00:08:43.985 READ: bw=8518KiB/s (8722kB/s), 87.6KiB/s-8256KiB/s (89.7kB/s-8454kB/s), io=8756KiB (8966kB), run=1001-1028msec 00:08:43.985 WRITE: bw=15.6MiB/s (16.3MB/s), 1992KiB/s-9.99MiB/s (2040kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1028msec 00:08:43.985 00:08:43.985 Disk stats (read/write): 00:08:43.985 nvme0n1: ios=48/512, merge=0/0, ticks=1725/96, in_queue=1821, util=98.10% 00:08:43.985 nvme0n2: ios=97/512, merge=0/0, ticks=1030/91, in_queue=1121, util=97.97% 00:08:43.985 nvme0n3: ios=43/512, merge=0/0, ticks=921/91, in_queue=1012, util=90.53% 00:08:43.985 nvme0n4: ios=1876/2048, merge=0/0, ticks=696/317, in_queue=1013, util=98.32% 00:08:43.985 09:19:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:43.985 [global] 00:08:43.985 thread=1 00:08:43.985 invalidate=1 00:08:43.985 rw=write 00:08:43.985 time_based=1 00:08:43.985 runtime=1 00:08:43.985 ioengine=libaio 00:08:43.985 direct=1 00:08:43.985 bs=4096 00:08:43.985 iodepth=128 00:08:43.985 norandommap=0 00:08:43.985 numjobs=1 00:08:43.985 00:08:43.985 verify_dump=1 00:08:43.985 verify_backlog=512 00:08:43.985 verify_state_save=0 00:08:43.985 do_verify=1 00:08:43.985 verify=crc32c-intel 00:08:43.985 [job0] 00:08:43.985 filename=/dev/nvme0n1 00:08:43.985 [job1] 00:08:43.985 filename=/dev/nvme0n2 00:08:43.985 [job2] 00:08:43.985 filename=/dev/nvme0n3 00:08:43.985 [job3] 00:08:43.985 filename=/dev/nvme0n4 00:08:43.985 Could not set queue depth (nvme0n1) 00:08:43.985 Could not set queue depth (nvme0n2) 00:08:43.985 Could not set queue depth (nvme0n3) 00:08:43.985 Could not set queue depth (nvme0n4) 00:08:43.985 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.985 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.985 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.985 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.985 fio-3.35 00:08:43.985 Starting 4 threads 00:08:45.354 00:08:45.354 job0: (groupid=0, jobs=1): err= 0: pid=3215254: Fri Dec 13 09:19:57 2024 00:08:45.354 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:08:45.354 slat (nsec): min=1155, max=11837k, avg=117937.24, stdev=839714.96 00:08:45.354 clat (usec): min=2705, max=65605, avg=14767.81, stdev=7487.38 00:08:45.354 lat (usec): min=2711, max=65612, avg=14885.75, stdev=7582.97 00:08:45.354 clat percentiles (usec): 00:08:45.354 | 1.00th=[ 5342], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[11469], 00:08:45.354 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13566], 00:08:45.354 | 70.00th=[15664], 80.00th=[15926], 90.00th=[18220], 95.00th=[27132], 00:08:45.354 | 99.00th=[52167], 99.50th=[58983], 99.90th=[65274], 99.95th=[65799], 00:08:45.354 | 99.99th=[65799] 00:08:45.354 write: IOPS=3712, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1010msec); 0 zone resets 00:08:45.354 slat (usec): min=2, max=9693, avg=121.16, stdev=644.05 00:08:45.354 clat (usec): min=324, max=81312, avg=20083.07, stdev=14554.38 00:08:45.354 lat (usec): min=1014, max=81315, avg=20204.24, stdev=14622.71 00:08:45.354 clat percentiles (usec): 00:08:45.354 | 1.00th=[ 1762], 5.00th=[ 4490], 10.00th=[ 6718], 20.00th=[ 9765], 00:08:45.354 | 30.00th=[10683], 40.00th=[12256], 50.00th=[13304], 60.00th=[19530], 00:08:45.354 | 70.00th=[23725], 80.00th=[30278], 90.00th=[44827], 95.00th=[52167], 00:08:45.354 | 99.00th=[58983], 99.50th=[71828], 99.90th=[79168], 99.95th=[79168], 00:08:45.354 | 99.99th=[81265] 00:08:45.354 bw ( KiB/s): min=12592, max=16384, per=20.65%, avg=14488.00, stdev=2681.35, samples=2 00:08:45.354 iops : min= 3148, max= 4096, avg=3622.00, stdev=670.34, samples=2 00:08:45.354 lat (usec) : 500=0.01% 00:08:45.354 lat (msec) : 2=0.70%, 4=1.47%, 10=14.78%, 20=58.73%, 50=20.08% 00:08:45.354 lat (msec) : 100=4.23% 00:08:45.355 cpu : usr=2.78%, sys=3.37%, ctx=411, majf=0, minf=1 00:08:45.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:45.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.355 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.355 job1: (groupid=0, jobs=1): err= 0: pid=3215271: Fri Dec 13 09:19:57 2024 00:08:45.355 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:08:45.355 slat (nsec): min=1431, max=17074k, avg=89385.51, stdev=688308.54 00:08:45.355 clat (usec): min=3549, max=31775, avg=11202.31, stdev=3014.70 00:08:45.355 lat (usec): min=3556, max=31783, avg=11291.69, stdev=3067.08 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 4817], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[ 9634], 00:08:45.355 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10159], 60.00th=[10421], 00:08:45.355 | 70.00th=[11207], 80.00th=[12780], 90.00th=[15270], 95.00th=[17171], 00:08:45.355 | 99.00th=[22938], 99.50th=[25297], 99.90th=[31851], 99.95th=[31851], 00:08:45.355 | 99.99th=[31851] 00:08:45.355 write: IOPS=5809, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1009msec); 0 zone resets 00:08:45.355 slat (usec): min=2, max=40878, avg=78.39, stdev=756.05 00:08:45.355 clat (usec): min=947, max=55859, avg=9630.29, stdev=2374.45 
00:08:45.355 lat (usec): min=961, max=80180, avg=9708.68, stdev=2569.64 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 3294], 5.00th=[ 5080], 10.00th=[ 6128], 20.00th=[ 8291], 00:08:45.355 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:08:45.355 | 70.00th=[10290], 80.00th=[10421], 90.00th=[11600], 95.00th=[13435], 00:08:45.355 | 99.00th=[15926], 99.50th=[16057], 99.90th=[19530], 99.95th=[26608], 00:08:45.355 | 99.99th=[55837] 00:08:45.355 bw ( KiB/s): min=20848, max=25032, per=32.69%, avg=22940.00, stdev=2958.53, samples=2 00:08:45.355 iops : min= 5212, max= 6258, avg=5735.00, stdev=739.63, samples=2 00:08:45.355 lat (usec) : 1000=0.03% 00:08:45.355 lat (msec) : 4=1.07%, 10=39.35%, 20=58.64%, 50=0.90%, 100=0.01% 00:08:45.355 cpu : usr=5.06%, sys=6.45%, ctx=578, majf=0, minf=1 00:08:45.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:45.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.355 issued rwts: total=5632,5862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.355 job2: (groupid=0, jobs=1): err= 0: pid=3215289: Fri Dec 13 09:19:57 2024 00:08:45.355 read: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.5MiB/1044msec) 00:08:45.355 slat (nsec): min=1293, max=22487k, avg=129444.83, stdev=974912.37 00:08:45.355 clat (usec): min=5956, max=75557, avg=18221.88, stdev=10197.74 00:08:45.355 lat (usec): min=5959, max=75561, avg=18351.33, stdev=10261.98 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 6063], 5.00th=[10290], 10.00th=[11207], 20.00th=[12780], 00:08:45.355 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[15401], 00:08:45.355 | 70.00th=[16319], 80.00th=[22414], 90.00th=[30278], 95.00th=[41157], 00:08:45.355 | 99.00th=[61080], 99.50th=[63701], 99.90th=[76022], 99.95th=[76022], 00:08:45.355 | 99.99th=[76022] 00:08:45.355 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:08:45.355 slat (usec): min=2, max=24216, avg=156.88, stdev=1095.63 00:08:45.355 clat (usec): min=646, max=75563, avg=20648.27, stdev=13213.79 00:08:45.355 lat (usec): min=800, max=75577, avg=20805.15, stdev=13308.15 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 3916], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[12911], 00:08:45.355 | 30.00th=[13698], 40.00th=[14222], 50.00th=[15008], 60.00th=[20055], 00:08:45.355 | 70.00th=[23462], 80.00th=[30278], 90.00th=[40109], 95.00th=[50070], 00:08:45.355 | 99.00th=[64226], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:08:45.355 | 99.99th=[76022] 00:08:45.355 bw ( KiB/s): min=13496, max=15160, per=20.42%, avg=14328.00, stdev=1176.63, samples=2 00:08:45.355 iops : min= 3374, max= 3790, avg=3582.00, stdev=294.16, samples=2 00:08:45.355 lat (usec) : 750=0.01% 00:08:45.355 lat (msec) : 4=0.75%, 10=9.73%, 20=57.67%, 50=28.21%, 100=3.63% 00:08:45.355 cpu : usr=2.40%, sys=3.36%, ctx=270, majf=0, minf=1 00:08:45.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:45.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.355 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.355 job3: (groupid=0, jobs=1): err= 0: pid=3215294: 
Fri Dec 13 09:19:57 2024 00:08:45.355 read: IOPS=4845, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:08:45.355 slat (nsec): min=1460, max=6722.0k, avg=98716.62, stdev=553818.14 00:08:45.355 clat (usec): min=787, max=24959, avg=12287.28, stdev=2316.76 00:08:45.355 lat (usec): min=2311, max=24984, avg=12386.00, stdev=2360.36 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11076], 00:08:45.355 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:08:45.355 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14746], 95.00th=[16909], 00:08:45.355 | 99.00th=[19792], 99.50th=[20317], 99.90th=[22152], 99.95th=[22938], 00:08:45.355 | 99.99th=[25035] 00:08:45.355 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:08:45.355 slat (usec): min=2, max=24380, avg=95.14, stdev=568.98 00:08:45.355 clat (usec): min=5636, max=32995, avg=12509.43, stdev=3475.76 00:08:45.355 lat (usec): min=5643, max=33000, avg=12604.57, stdev=3516.29 00:08:45.355 clat percentiles (usec): 00:08:45.355 | 1.00th=[ 5800], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11207], 00:08:45.355 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:08:45.355 | 70.00th=[12125], 80.00th=[13304], 90.00th=[15401], 95.00th=[19268], 00:08:45.355 | 99.00th=[27657], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:08:45.355 | 99.99th=[32900] 00:08:45.355 bw ( KiB/s): min=20480, max=20480, per=29.18%, avg=20480.00, stdev= 0.00, samples=2 00:08:45.355 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:45.355 lat (usec) : 1000=0.01% 00:08:45.355 lat (msec) : 4=0.07%, 10=9.18%, 20=88.02%, 50=2.73% 00:08:45.355 cpu : usr=3.59%, sys=7.39%, ctx=563, majf=0, minf=1 00:08:45.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:45.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.355 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.355 00:08:45.355 Run status group 0 (all jobs): 00:08:45.355 READ: bw=64.6MiB/s (67.8MB/s), 12.0MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=67.5MiB (70.8MB), run=1003-1044msec 00:08:45.355 WRITE: bw=68.5MiB/s (71.9MB/s), 13.4MiB/s-22.7MiB/s (14.1MB/s-23.8MB/s), io=71.5MiB (75.0MB), run=1003-1044msec 00:08:45.355 00:08:45.355 Disk stats (read/write): 00:08:45.355 nvme0n1: ios=2610/3071, merge=0/0, ticks=37310/66075, in_queue=103385, util=86.87% 00:08:45.355 nvme0n2: ios=4627/5015, merge=0/0, ticks=50178/46680, in_queue=96858, util=95.52% 00:08:45.355 nvme0n3: ios=3045/3072, merge=0/0, ticks=28924/30497, in_queue=59421, util=98.23% 00:08:45.355 nvme0n4: ios=4156/4343, merge=0/0, ticks=25553/26238, in_queue=51791, util=99.79% 00:08:45.355 09:19:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:45.355 [global] 00:08:45.355 thread=1 00:08:45.355 invalidate=1 00:08:45.355 rw=randwrite 00:08:45.355 time_based=1 00:08:45.355 runtime=1 00:08:45.355 ioengine=libaio 00:08:45.355 direct=1 00:08:45.355 bs=4096 00:08:45.355 iodepth=128 00:08:45.355 norandommap=0 00:08:45.355 numjobs=1 00:08:45.355 00:08:45.355 verify_dump=1 00:08:45.355 verify_backlog=512 00:08:45.355 verify_state_save=0 00:08:45.355 do_verify=1 00:08:45.355 
verify=crc32c-intel 00:08:45.355 [job0] 00:08:45.355 filename=/dev/nvme0n1 00:08:45.355 [job1] 00:08:45.355 filename=/dev/nvme0n2 00:08:45.355 [job2] 00:08:45.355 filename=/dev/nvme0n3 00:08:45.355 [job3] 00:08:45.355 filename=/dev/nvme0n4 00:08:45.355 Could not set queue depth (nvme0n1) 00:08:45.355 Could not set queue depth (nvme0n2) 00:08:45.355 Could not set queue depth (nvme0n3) 00:08:45.355 Could not set queue depth (nvme0n4) 00:08:45.612 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.612 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.612 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.612 fio-3.35 00:08:45.612 Starting 4 threads 00:08:46.982 00:08:46.982 job0: (groupid=0, jobs=1): err= 0: pid=3215691: Fri Dec 13 09:19:59 2024 00:08:46.982 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:08:46.982 slat (nsec): min=1485, max=9546.3k, avg=108545.24, stdev=660495.84 00:08:46.982 clat (usec): min=3297, max=50301, avg=13692.56, stdev=6754.12 00:08:46.982 lat (usec): min=3300, max=50327, avg=13801.11, stdev=6810.56 00:08:46.982 clat percentiles (usec): 00:08:46.982 | 1.00th=[ 3556], 5.00th=[ 6652], 10.00th=[ 9372], 20.00th=[10421], 00:08:46.982 | 30.00th=[10945], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:08:46.982 | 70.00th=[12649], 80.00th=[13304], 90.00th=[24249], 95.00th=[29230], 00:08:46.982 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:08:46.982 | 99.99th=[50070] 00:08:46.982 write: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1006msec); 0 zone resets 00:08:46.982 slat (usec): min=2, max=11636, avg=101.18, stdev=566.10 00:08:46.982 clat (usec): min=1530, max=45521, avg=13889.54, stdev=6274.03 00:08:46.982 lat (usec): min=1567, max=45543, avg=13990.72, stdev=6323.65 00:08:46.982 clat percentiles (usec): 00:08:46.982 | 1.00th=[ 3458], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10552], 00:08:46.982 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:08:46.982 | 70.00th=[12518], 80.00th=[14353], 90.00th=[21365], 95.00th=[28967], 00:08:46.982 | 99.00th=[38536], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:08:46.982 | 99.99th=[45351] 00:08:46.982 bw ( KiB/s): min=16384, max=20480, per=25.49%, avg=18432.00, stdev=2896.31, samples=2 00:08:46.982 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:08:46.982 lat (msec) : 2=0.01%, 4=1.64%, 10=11.12%, 20=73.28%, 50=13.93% 00:08:46.982 lat (msec) : 100=0.01% 00:08:46.982 cpu : usr=4.98%, sys=5.37%, ctx=431, majf=0, minf=1 00:08:46.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:46.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.982 issued rwts: total=4608,4643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.982 job1: (groupid=0, jobs=1): err= 0: pid=3215692: Fri Dec 13 09:19:59 2024 00:08:46.982 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:08:46.982 slat (nsec): min=1109, max=16291k, avg=104247.56, stdev=704011.27 00:08:46.982 clat (usec): min=4247, max=37923, avg=13143.28, 
stdev=4436.52 00:08:46.982 lat (usec): min=4260, max=37930, avg=13247.53, stdev=4494.48 00:08:46.982 clat percentiles (usec): 00:08:46.982 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10290], 00:08:46.982 | 30.00th=[10945], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:08:46.982 | 70.00th=[13173], 80.00th=[14746], 90.00th=[19530], 95.00th=[23725], 00:08:46.982 | 99.00th=[30016], 99.50th=[31065], 99.90th=[38011], 99.95th=[38011], 00:08:46.982 | 99.99th=[38011] 00:08:46.982 write: IOPS=4328, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1007msec); 0 zone resets 00:08:46.982 slat (nsec): min=1702, max=23526k, avg=126559.82, stdev=793018.27 00:08:46.982 clat (usec): min=3771, max=61627, avg=16865.02, stdev=9857.18 00:08:46.982 lat (usec): min=4196, max=61631, avg=16991.58, stdev=9929.27 00:08:46.982 clat percentiles (usec): 00:08:46.982 | 1.00th=[ 7177], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11207], 00:08:46.982 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12256], 60.00th=[13566], 00:08:46.982 | 70.00th=[16909], 80.00th=[20579], 90.00th=[32637], 95.00th=[40633], 00:08:46.983 | 99.00th=[56886], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:08:46.983 | 99.99th=[61604] 00:08:46.983 bw ( KiB/s): min=16376, max=17472, per=23.40%, avg=16924.00, stdev=774.99, samples=2 00:08:46.983 iops : min= 4094, max= 4368, avg=4231.00, stdev=193.75, samples=2 00:08:46.983 lat (msec) : 4=0.01%, 10=9.14%, 20=73.83%, 50=16.07%, 100=0.95% 00:08:46.983 cpu : usr=2.68%, sys=4.67%, ctx=403, majf=0, minf=1 00:08:46.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:46.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.983 issued rwts: total=4096,4359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.983 job2: (groupid=0, jobs=1): err= 0: pid=3215693: Fri Dec 13 09:19:59 2024 00:08:46.983 read: IOPS=4586, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:08:46.983 slat (nsec): min=1360, max=24112k, avg=122082.11, stdev=924870.56 00:08:46.983 clat (usec): min=709, max=66450, avg=15691.70, stdev=9061.93 00:08:46.983 lat (usec): min=2392, max=66470, avg=15813.78, stdev=9132.75 00:08:46.983 clat percentiles (usec): 00:08:46.983 | 1.00th=[ 5276], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[ 9765], 00:08:46.983 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13435], 60.00th=[13698], 00:08:46.983 | 70.00th=[15401], 80.00th=[18220], 90.00th=[25297], 95.00th=[40633], 00:08:46.983 | 99.00th=[50594], 99.50th=[50594], 99.90th=[58459], 99.95th=[58459], 00:08:46.983 | 99.99th=[66323] 00:08:46.983 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:08:46.983 slat (usec): min=2, max=15124, avg=80.85, stdev=480.29 00:08:46.983 clat (usec): min=603, max=28374, avg=11953.19, stdev=3917.62 00:08:46.983 lat (usec): min=610, max=28406, avg=12034.03, stdev=3961.06 00:08:46.983 clat percentiles (usec): 00:08:46.983 | 1.00th=[ 2114], 5.00th=[ 3818], 10.00th=[ 6652], 20.00th=[ 9372], 00:08:46.983 | 30.00th=[ 9634], 40.00th=[11469], 50.00th=[13304], 60.00th=[13698], 00:08:46.983 | 70.00th=[13829], 80.00th=[13960], 90.00th=[15926], 95.00th=[18220], 00:08:46.983 | 99.00th=[22152], 99.50th=[22938], 99.90th=[25297], 99.95th=[25297], 00:08:46.983 | 99.99th=[28443] 00:08:46.983 bw ( KiB/s): min=18424, max=18440, per=25.49%, avg=18432.00, stdev=11.31, samples=2 00:08:46.983 iops : min= 4606, max= 4610, 
avg=4608.00, stdev= 2.83, samples=2 00:08:46.983 lat (usec) : 750=0.10%, 1000=0.01% 00:08:46.983 lat (msec) : 2=0.39%, 4=2.49%, 10=25.89%, 20=60.62%, 50=9.80% 00:08:46.983 lat (msec) : 100=0.71% 00:08:46.983 cpu : usr=3.59%, sys=4.79%, ctx=542, majf=0, minf=1 00:08:46.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:46.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.983 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.983 job3: (groupid=0, jobs=1): err= 0: pid=3215694: Fri Dec 13 09:19:59 2024 00:08:46.983 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:08:46.983 slat (nsec): min=1232, max=13704k, avg=118503.36, stdev=864257.59 00:08:46.983 clat (usec): min=4117, max=71851, avg=15438.95, stdev=7377.93 00:08:46.983 lat (usec): min=4127, max=71946, avg=15557.45, stdev=7427.06 00:08:46.983 clat percentiles (usec): 00:08:46.983 | 1.00th=[ 7439], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10945], 00:08:46.983 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:08:46.983 | 70.00th=[14877], 80.00th=[18744], 90.00th=[22938], 95.00th=[29230], 00:08:46.983 | 99.00th=[44303], 99.50th=[47449], 99.90th=[71828], 99.95th=[71828], 00:08:46.983 | 99.99th=[71828] 00:08:46.983 write: IOPS=4573, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1005msec); 0 zone resets 00:08:46.983 slat (usec): min=2, max=24637, avg=104.57, stdev=766.91 00:08:46.983 clat (usec): min=1565, max=36027, avg=13919.16, stdev=4668.26 00:08:46.983 lat (usec): min=3532, max=36051, avg=14023.73, stdev=4704.93 00:08:46.983 clat percentiles (usec): 00:08:46.983 | 1.00th=[ 4621], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[11207], 00:08:46.983 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13173], 60.00th=[13698], 00:08:46.983 | 70.00th=[13960], 80.00th=[17957], 90.00th=[20841], 95.00th=[23987], 00:08:46.983 | 99.00th=[26870], 99.50th=[26870], 99.90th=[29230], 99.95th=[29230], 00:08:46.983 | 99.99th=[35914] 00:08:46.983 bw ( KiB/s): min=15280, max=20464, per=24.71%, avg=17872.00, stdev=3665.64, samples=2 00:08:46.983 iops : min= 3820, max= 5116, avg=4468.00, stdev=916.41, samples=2 00:08:46.983 lat (msec) : 2=0.01%, 4=0.15%, 10=10.64%, 20=76.16%, 50=12.90% 00:08:46.983 lat (msec) : 100=0.14% 00:08:46.983 cpu : usr=3.69%, sys=5.98%, ctx=393, majf=0, minf=1 00:08:46.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:46.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.983 issued rwts: total=4096,4596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.983 00:08:46.983 Run status group 0 (all jobs): 00:08:46.983 READ: bw=67.5MiB/s (70.8MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1003-1007msec 00:08:46.983 WRITE: bw=70.6MiB/s (74.1MB/s), 16.9MiB/s-18.0MiB/s (17.7MB/s-18.9MB/s), io=71.1MiB (74.6MB), run=1003-1007msec 00:08:46.983 00:08:46.983 Disk stats (read/write): 00:08:46.983 nvme0n1: ios=4082/4118, merge=0/0, ticks=22092/29367, in_queue=51459, util=86.46% 00:08:46.983 nvme0n2: ios=3421/3584, merge=0/0, ticks=20628/26119, in_queue=46747, util=83.28% 00:08:46.983 nvme0n3: ios=3378/3584, merge=0/0, ticks=34304/24551, in_queue=58855, 
util=87.15% 00:08:46.983 nvme0n4: ios=3600/3960, merge=0/0, ticks=47015/51105, in_queue=98120, util=97.91% 00:08:46.983 09:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:46.983 09:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3215914 00:08:46.983 09:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:46.983 09:19:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:46.983 [global] 00:08:46.983 thread=1 00:08:46.983 invalidate=1 00:08:46.983 rw=read 00:08:46.983 time_based=1 00:08:46.983 runtime=10 00:08:46.983 ioengine=libaio 00:08:46.983 direct=1 00:08:46.983 bs=4096 00:08:46.983 iodepth=1 00:08:46.983 norandommap=1 00:08:46.983 numjobs=1 00:08:46.983 00:08:46.983 [job0] 00:08:46.983 filename=/dev/nvme0n1 00:08:46.983 [job1] 00:08:46.983 filename=/dev/nvme0n2 00:08:46.983 [job2] 00:08:46.983 filename=/dev/nvme0n3 00:08:46.983 [job3] 00:08:46.983 filename=/dev/nvme0n4 00:08:46.983 Could not set queue depth (nvme0n1) 00:08:46.983 Could not set queue depth (nvme0n2) 00:08:46.983 Could not set queue depth (nvme0n3) 00:08:46.983 Could not set queue depth (nvme0n4) 00:08:47.240 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.240 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.240 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.240 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.240 fio-3.35 00:08:47.240 Starting 4 threads 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:50.515 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3768320, buflen=4096 00:08:50.515 fio: pid=3216063, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:50.515 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11055104, buflen=4096 00:08:50.515 fio: pid=3216062, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.515 09:20:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:50.515 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1241088, buflen=4096 00:08:50.515 fio: pid=3216060, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:50.772 fio: io_u error on file 
/dev/nvme0n2: Input/output error: read offset=6123520, buflen=4096 00:08:50.772 fio: pid=3216061, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:08:50.772 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.772 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:50.772 00:08:50.772 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3216060: Fri Dec 13 09:20:03 2024 00:08:50.772 read: IOPS=96, BW=383KiB/s (393kB/s)(1212KiB/3162msec) 00:08:50.772 slat (usec): min=7, max=8746, avg=40.65, stdev=501.02 00:08:50.772 clat (usec): min=186, max=41529, avg=10321.29, stdev=17602.60 00:08:50.772 lat (usec): min=194, max=49999, avg=10362.00, stdev=17666.64 00:08:50.772 clat percentiles (usec): 00:08:50.772 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:08:50.772 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 253], 60.00th=[ 273], 00:08:50.772 | 70.00th=[ 306], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:50.772 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:08:50.772 | 99.99th=[41681] 00:08:50.772 bw ( KiB/s): min= 96, max= 1896, per=6.15%, avg=399.00, stdev=733.39, samples=6 00:08:50.772 iops : min= 24, max= 474, avg=99.67, stdev=183.39, samples=6 00:08:50.772 lat (usec) : 250=48.68%, 500=25.99%, 750=0.33% 00:08:50.772 lat (msec) : 50=24.67% 00:08:50.772 cpu : usr=0.00%, sys=0.28%, ctx=308, majf=0, minf=1 00:08:50.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.772 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.772 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.772 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3216061: Fri Dec 13 09:20:03 2024 00:08:50.772 read: IOPS=448, BW=1791KiB/s (1834kB/s)(5980KiB/3339msec) 00:08:50.772 slat (usec): min=2, max=6394, avg=19.59, stdev=266.28 00:08:50.772 clat (usec): min=199, max=42111, avg=2211.83, stdev=8749.32 00:08:50.772 lat (usec): min=205, max=47019, avg=2227.15, stdev=8789.04 00:08:50.772 clat percentiles (usec): 00:08:50.772 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 231], 00:08:50.772 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 243], 00:08:50.772 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 947], 00:08:50.772 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:50.772 | 99.99th=[42206] 00:08:50.772 bw ( KiB/s): min= 93, max=11408, per=30.54%, avg=1982.17, stdev=4617.70, samples=6 00:08:50.772 iops : min= 23, max= 2852, avg=495.50, stdev=1154.44, samples=6 00:08:50.772 lat (usec) : 250=78.21%, 500=16.51%, 750=0.13%, 1000=0.13% 00:08:50.772 lat (msec) : 2=0.07%, 4=0.07%, 50=4.81% 00:08:50.772 cpu : usr=0.06%, sys=0.69%, ctx=1498, majf=0, minf=2 00:08:50.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.772 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.772 issued rwts: 
total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.772 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3216062: Fri Dec 13 09:20:03 2024 00:08:50.772 read: IOPS=916, BW=3666KiB/s (3754kB/s)(10.5MiB/2945msec) 00:08:50.772 slat (nsec): min=7177, max=77891, avg=9663.95, stdev=2849.94 00:08:50.772 clat (usec): min=193, max=42095, avg=1071.47, stdev=5785.86 00:08:50.772 lat (usec): min=201, max=42119, avg=1081.12, stdev=5788.07 00:08:50.772 clat percentiles (usec): 00:08:50.772 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:08:50.773 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 237], 00:08:50.773 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 293], 00:08:50.773 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:50.773 | 99.99th=[42206] 00:08:50.773 bw ( KiB/s): min= 96, max=15872, per=66.26%, avg=4300.80, stdev=6856.06, samples=5 00:08:50.773 iops : min= 24, max= 3968, avg=1075.20, stdev=1714.01, samples=5 00:08:50.773 lat (usec) : 250=80.93%, 500=16.81%, 750=0.19% 00:08:50.773 lat (msec) : 50=2.04% 00:08:50.773 cpu : usr=0.48%, sys=1.70%, ctx=2702, majf=0, minf=2 00:08:50.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.773 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.773 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.773 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3216063: Fri Dec 13 09:20:03 2024 00:08:50.773 read: IOPS=338, BW=1353KiB/s (1385kB/s)(3680KiB/2720msec) 00:08:50.773 slat (nsec): min=6963, max=32313, avg=8921.60, stdev=3957.82 00:08:50.773 clat (usec): min=176, max=41985, avg=2917.69, stdev=10173.80 00:08:50.773 lat (usec): min=184, max=42007, avg=2926.61, stdev=10177.41 00:08:50.773 clat percentiles (usec): 00:08:50.773 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:08:50.773 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:08:50.773 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[41157], 00:08:50.773 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:50.773 | 99.99th=[42206] 00:08:50.773 bw ( KiB/s): min= 96, max= 112, per=1.53%, avg=99.20, stdev= 7.16, samples=5 00:08:50.773 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:08:50.773 lat (usec) : 250=92.51%, 500=0.65% 00:08:50.773 lat (msec) : 10=0.11%, 50=6.62% 00:08:50.773 cpu : usr=0.22%, sys=0.51%, ctx=921, majf=0, minf=2 00:08:50.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.773 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.773 issued rwts: total=921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.773 00:08:50.773 Run status group 0 (all jobs): 00:08:50.773 READ: bw=6489KiB/s (6645kB/s), 383KiB/s-3666KiB/s (393kB/s-3754kB/s), io=21.2MiB (22.2MB), run=2720-3339msec 00:08:50.773 00:08:50.773 Disk stats (read/write): 00:08:50.773 nvme0n1: ios=338/0, merge=0/0, ticks=3641/0, in_queue=3641, util=99.51% 
00:08:50.773 nvme0n2: ios=1489/0, merge=0/0, ticks=3053/0, in_queue=3053, util=96.01% 00:08:50.773 nvme0n3: ios=2739/0, merge=0/0, ticks=3699/0, in_queue=3699, util=99.66% 00:08:50.773 nvme0n4: ios=500/0, merge=0/0, ticks=2599/0, in_queue=2599, util=96.48% 00:08:51.030 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.030 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:51.287 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.287 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:51.544 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.544 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:51.544 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.544 09:20:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:51.801 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:51.801 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3215914 00:08:51.801 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:51.801 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:52.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:52.058 nvmf hotplug test: fio failed as expected 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.058 09:20:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:52.058 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.316 rmmod nvme_tcp 00:08:52.316 rmmod nvme_fabrics 00:08:52.316 rmmod nvme_keyring 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3213056 ']' 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3213056 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3213056 ']' 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3213056 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3213056 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3213056' 00:08:52.316 killing process with pid 3213056 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3213056 00:08:52.316 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3213056 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:52.574 09:20:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.574 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:52.575 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.575 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.575 09:20:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.484 00:08:54.484 real 0m26.570s 00:08:54.484 user 1m47.061s 00:08:54.484 sys 0m8.053s 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.484 ************************************ 00:08:54.484 END TEST nvmf_fio_target 00:08:54.484 ************************************ 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.484 09:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.742 ************************************ 00:08:54.742 START TEST nvmf_bdevio 00:08:54.742 ************************************ 00:08:54.742 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:54.742 * Looking for test storage... 
00:08:54.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.742 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:54.742 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:08:54.742 09:20:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:54.742 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:54.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.743 --rc genhtml_branch_coverage=1 00:08:54.743 --rc genhtml_function_coverage=1 00:08:54.743 --rc genhtml_legend=1 00:08:54.743 --rc geninfo_all_blocks=1 00:08:54.743 --rc geninfo_unexecuted_blocks=1 00:08:54.743 00:08:54.743 ' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:54.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.743 --rc genhtml_branch_coverage=1 00:08:54.743 --rc genhtml_function_coverage=1 00:08:54.743 --rc genhtml_legend=1 00:08:54.743 --rc geninfo_all_blocks=1 00:08:54.743 --rc geninfo_unexecuted_blocks=1 00:08:54.743 00:08:54.743 ' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:54.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.743 --rc genhtml_branch_coverage=1 00:08:54.743 --rc genhtml_function_coverage=1 00:08:54.743 --rc genhtml_legend=1 00:08:54.743 --rc geninfo_all_blocks=1 00:08:54.743 --rc geninfo_unexecuted_blocks=1 00:08:54.743 00:08:54.743 ' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:54.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.743 --rc genhtml_branch_coverage=1 00:08:54.743 --rc genhtml_function_coverage=1 00:08:54.743 --rc genhtml_legend=1 00:08:54.743 --rc geninfo_all_blocks=1 00:08:54.743 --rc geninfo_unexecuted_blocks=1 00:08:54.743 00:08:54.743 ' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.743 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.744 09:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:00.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:00.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.007 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.008 09:20:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:00.008 Found net devices under 0000:af:00.0: cvl_0_0 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:00.008 Found net devices under 0000:af:00.1: cvl_0_1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.008 
09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.008 09:20:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:09:00.008 00:09:00.008 --- 10.0.0.2 ping statistics --- 00:09:00.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.008 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:00.008 00:09:00.008 --- 10.0.0.1 ping statistics --- 00:09:00.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.008 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3220227 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3220227 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3220227 ']' 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.008 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.008 [2024-12-13 09:20:12.178246] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:00.008 [2024-12-13 09:20:12.178289] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.008 [2024-12-13 09:20:12.244403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.008 [2024-12-13 09:20:12.285902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.008 [2024-12-13 09:20:12.285939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.008 [2024-12-13 09:20:12.285947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.008 [2024-12-13 09:20:12.285956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.008 [2024-12-13 09:20:12.285961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.008 [2024-12-13 09:20:12.287499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:00.008 [2024-12-13 09:20:12.287606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:00.008 [2024-12-13 09:20:12.287712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.008 [2024-12-13 09:20:12.287713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 [2024-12-13 09:20:12.428888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 Malloc0 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.286 09:20:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.286 [2024-12-13 09:20:12.495513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.286 { 00:09:00.286 "params": { 00:09:00.286 "name": "Nvme$subsystem", 00:09:00.286 "trtype": "$TEST_TRANSPORT", 00:09:00.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.286 "adrfam": "ipv4", 00:09:00.286 "trsvcid": "$NVMF_PORT", 00:09:00.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.286 "hdgst": ${hdgst:-false}, 00:09:00.286 "ddgst": ${ddgst:-false} 00:09:00.286 }, 00:09:00.286 "method": "bdev_nvme_attach_controller" 00:09:00.286 } 00:09:00.286 EOF 00:09:00.286 )") 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:00.286 09:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.286 "params": { 00:09:00.286 "name": "Nvme1", 00:09:00.286 "trtype": "tcp", 00:09:00.286 "traddr": "10.0.0.2", 00:09:00.286 "adrfam": "ipv4", 00:09:00.286 "trsvcid": "4420", 00:09:00.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.286 "hdgst": false, 00:09:00.286 "ddgst": false 00:09:00.286 }, 00:09:00.286 "method": "bdev_nvme_attach_controller" 00:09:00.286 }' 00:09:00.286 [2024-12-13 09:20:12.548803] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:09:00.286 [2024-12-13 09:20:12.548847] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220253 ] 00:09:00.286 [2024-12-13 09:20:12.614298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.553 [2024-12-13 09:20:12.658580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.553 [2024-12-13 09:20:12.658598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.553 [2024-12-13 09:20:12.658601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.817 I/O targets: 00:09:00.817 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:00.817 00:09:00.817 00:09:00.817 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.817 http://cunit.sourceforge.net/ 00:09:00.817 00:09:00.817 00:09:00.817 Suite: bdevio tests on: Nvme1n1 00:09:00.817 Test: blockdev write read block ...passed 00:09:00.817 Test: blockdev write zeroes read block ...passed 00:09:00.817 Test: blockdev write zeroes read no split ...passed 00:09:00.817 Test: blockdev write zeroes read split ...passed 00:09:00.817 Test: blockdev write zeroes read split partial ...passed 00:09:00.817 Test: blockdev reset ...[2024-12-13 09:20:13.086751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:00.817 [2024-12-13 09:20:13.086809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c4610 (9): Bad file descriptor 00:09:00.817 [2024-12-13 09:20:13.143567] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:00.817 passed 00:09:00.817 Test: blockdev write read 8 blocks ...passed 00:09:00.817 Test: blockdev write read size > 128k ...passed 00:09:00.817 Test: blockdev write read invalid size ...passed 00:09:01.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:01.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:01.074 Test: blockdev write read max offset ...passed 00:09:01.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:01.074 Test: blockdev writev readv 8 blocks ...passed 00:09:01.074 Test: blockdev writev readv 30 x 1block ...passed 00:09:01.074 Test: blockdev writev readv block ...passed 00:09:01.074 Test: blockdev writev readv size > 128k ...passed 00:09:01.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:01.075 Test: blockdev comparev and writev ...[2024-12-13 09:20:13.315183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.315229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.315495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.315516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.315764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.315784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.315791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.316039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.316048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.316059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:01.075 [2024-12-13 09:20:13.316066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:01.075 passed 00:09:01.075 Test: blockdev nvme passthru rw ...passed 00:09:01.075 Test: blockdev nvme passthru vendor specific ...[2024-12-13 09:20:13.397806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.075 [2024-12-13 09:20:13.397825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.397936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.075 [2024-12-13 09:20:13.397946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.398051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.075 [2024-12-13 09:20:13.398060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:01.075 [2024-12-13 09:20:13.398164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:01.075 [2024-12-13 09:20:13.398173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:01.075 passed 00:09:01.075 Test: blockdev nvme admin passthru ...passed 00:09:01.332 Test: blockdev copy ...passed 00:09:01.332 00:09:01.332 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.332 suites 1 1 n/a 0 0 00:09:01.332 tests 23 23 23 0 0 00:09:01.332 asserts 152 152 152 0 n/a 00:09:01.332 00:09:01.332 Elapsed time = 1.051 seconds 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.332 rmmod nvme_tcp 00:09:01.332 rmmod nvme_fabrics 00:09:01.332 rmmod nvme_keyring 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3220227 ']' 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3220227 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3220227 ']' 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3220227 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.332 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220227 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220227' 00:09:01.590 killing process with pid 3220227 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3220227 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3220227 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.590 09:20:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.122 09:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.122 00:09:04.122 real 0m9.119s 00:09:04.122 user 0m10.102s 00:09:04.122 sys 0m4.244s 00:09:04.122 09:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.122 09:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.122 ************************************ 00:09:04.122 END TEST nvmf_bdevio 00:09:04.122 ************************************ 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:04.122 00:09:04.122 real 4m27.454s 00:09:04.122 user 10m13.794s 00:09:04.122 sys 1m32.188s 
00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.122 ************************************ 00:09:04.122 END TEST nvmf_target_core 00:09:04.122 ************************************ 00:09:04.122 09:20:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:04.122 09:20:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.122 09:20:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.122 09:20:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.122 ************************************ 00:09:04.122 START TEST nvmf_target_extra 00:09:04.122 ************************************ 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:04.122 * Looking for test storage... 00:09:04.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.122 --rc genhtml_branch_coverage=1 00:09:04.122 --rc genhtml_function_coverage=1 00:09:04.122 --rc genhtml_legend=1 00:09:04.122 --rc geninfo_all_blocks=1 00:09:04.122 --rc geninfo_unexecuted_blocks=1 00:09:04.122 00:09:04.122 ' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.122 --rc genhtml_branch_coverage=1 00:09:04.122 --rc genhtml_function_coverage=1 00:09:04.122 --rc genhtml_legend=1 00:09:04.122 --rc geninfo_all_blocks=1 00:09:04.122 --rc geninfo_unexecuted_blocks=1 00:09:04.122 00:09:04.122 ' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.122 --rc genhtml_branch_coverage=1 00:09:04.122 --rc genhtml_function_coverage=1 00:09:04.122 --rc genhtml_legend=1 00:09:04.122 --rc geninfo_all_blocks=1 00:09:04.122 --rc geninfo_unexecuted_blocks=1 00:09:04.122 00:09:04.122 ' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.122 --rc genhtml_branch_coverage=1 00:09:04.122 --rc genhtml_function_coverage=1 00:09:04.122 --rc genhtml_legend=1 00:09:04.122 --rc geninfo_all_blocks=1 00:09:04.122 --rc geninfo_unexecuted_blocks=1 00:09:04.122 00:09:04.122 ' 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:04.122 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:04.123 ************************************ 00:09:04.123 START TEST nvmf_example 00:09:04.123 ************************************ 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:04.123 * Looking for test storage... 
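The "test/nvmf/common.sh: line 33: [: : integer expression expected" message just above is bash complaining that '[' '' -eq 1 ']' compares an empty string numerically; the test simply evaluates false and the run continues, so here it appears to be noise rather than a failure. A small standalone reproduction and a common defensive pattern (the variable name some_flag is hypothetical; the trace does not show which variable common.sh line 33 actually tests, and this is not SPDK's code):

    #!/usr/bin/env bash
    some_flag=""                                    # unset/empty, as in the trace

    if [ "$some_flag" -eq 1 ]; then                 # prints: [: : integer expression expected
        echo "flag is set"
    fi

    # Defaulting the variable to 0 keeps the comparison a valid integer test
    # and silences the warning without changing the outcome.
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag is set"
    fi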
00:09:04.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.123 --rc genhtml_branch_coverage=1 00:09:04.123 --rc genhtml_function_coverage=1 00:09:04.123 --rc genhtml_legend=1 00:09:04.123 --rc geninfo_all_blocks=1 00:09:04.123 --rc geninfo_unexecuted_blocks=1 00:09:04.123 00:09:04.123 ' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.123 --rc genhtml_branch_coverage=1 00:09:04.123 --rc genhtml_function_coverage=1 00:09:04.123 --rc genhtml_legend=1 00:09:04.123 --rc geninfo_all_blocks=1 00:09:04.123 --rc geninfo_unexecuted_blocks=1 00:09:04.123 00:09:04.123 ' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.123 --rc genhtml_branch_coverage=1 00:09:04.123 --rc genhtml_function_coverage=1 00:09:04.123 --rc genhtml_legend=1 00:09:04.123 --rc geninfo_all_blocks=1 00:09:04.123 --rc geninfo_unexecuted_blocks=1 00:09:04.123 00:09:04.123 ' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.123 --rc genhtml_branch_coverage=1 00:09:04.123 --rc genhtml_function_coverage=1 00:09:04.123 --rc genhtml_legend=1 00:09:04.123 --rc geninfo_all_blocks=1 00:09:04.123 --rc geninfo_unexecuted_blocks=1 00:09:04.123 00:09:04.123 ' 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:04.123 09:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.123 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.381 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:04.382 09:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.382 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:09.642 09:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:09.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:09.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:09.642 Found net devices under 0000:af:00.0: cvl_0_0 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.642 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:09.643 Found net devices under 0000:af:00.1: cvl_0_1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.643 09:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.643 09:20:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.643 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.643 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.643 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.643 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:09:09.901 00:09:09.901 --- 10.0.0.2 ping statistics --- 00:09:09.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.901 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:09.901 00:09:09.901 --- 10.0.0.1 ping statistics --- 00:09:09.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.901 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3224012 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3224012 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3224012 ']' 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.901 09:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.901 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:10.832 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:23.015 Initializing NVMe Controllers 00:09:23.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:23.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:23.015 Initialization complete. Launching workers. 00:09:23.015 ======================================================== 00:09:23.015 Latency(us) 00:09:23.015 Device Information : IOPS MiB/s Average min max 00:09:23.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18275.04 71.39 3501.59 545.47 15508.99 00:09:23.015 ======================================================== 00:09:23.015 Total : 18275.04 71.39 3501.59 545.47 15508.99 00:09:23.015 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.015 rmmod nvme_tcp 00:09:23.015 rmmod nvme_fabrics 00:09:23.015 rmmod nvme_keyring 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3224012 ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3224012 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3224012 ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3224012 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3224012 00:09:23.015 09:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3224012' 00:09:23.015 killing process with pid 3224012 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3224012 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3224012 00:09:23.015 nvmf threads initialize successfully 00:09:23.015 bdev subsystem init successfully 00:09:23.015 created a nvmf target service 00:09:23.015 create targets's poll groups done 00:09:23.015 all subsystems of target started 00:09:23.015 nvmf target is running 00:09:23.015 all subsystems of target stopped 00:09:23.015 destroy targets's poll groups done 00:09:23.015 destroyed the nvmf target service 00:09:23.015 bdev subsystem finish successfully 00:09:23.015 nvmf threads destroy successfully 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.015 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.273 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.273 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:23.273 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.273 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.532 00:09:23.532 real 0m19.363s 00:09:23.532 user 0m46.043s 00:09:23.532 sys 0m5.627s 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.532 ************************************ 00:09:23.532 END TEST nvmf_example 00:09:23.532 ************************************ 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:23.532 ************************************ 00:09:23.532 START TEST nvmf_filesystem 00:09:23.532 ************************************ 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:23.532 * Looking for test storage... 00:09:23.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.532 --rc genhtml_branch_coverage=1 00:09:23.532 --rc genhtml_function_coverage=1 00:09:23.532 --rc genhtml_legend=1 00:09:23.532 --rc geninfo_all_blocks=1 00:09:23.532 --rc geninfo_unexecuted_blocks=1 00:09:23.532 00:09:23.532 ' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.532 --rc genhtml_branch_coverage=1 00:09:23.532 --rc genhtml_function_coverage=1 00:09:23.532 --rc genhtml_legend=1 00:09:23.532 --rc geninfo_all_blocks=1 00:09:23.532 --rc geninfo_unexecuted_blocks=1 00:09:23.532 00:09:23.532 ' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.532 --rc genhtml_branch_coverage=1 00:09:23.532 --rc genhtml_function_coverage=1 00:09:23.532 --rc genhtml_legend=1 00:09:23.532 --rc geninfo_all_blocks=1 00:09:23.532 --rc geninfo_unexecuted_blocks=1 00:09:23.532 00:09:23.532 ' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.532 --rc genhtml_branch_coverage=1 00:09:23.532 --rc genhtml_function_coverage=1 00:09:23.532 --rc genhtml_legend=1 00:09:23.532 --rc geninfo_all_blocks=1 00:09:23.532 --rc geninfo_unexecuted_blocks=1 00:09:23.532 00:09:23.532 ' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:23.532 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:23.532 
09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:23.532 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:23.533 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:23.794 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:23.795 #define SPDK_CONFIG_H 00:09:23.795 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:23.795 #define SPDK_CONFIG_APPS 1 00:09:23.795 #define SPDK_CONFIG_ARCH native 00:09:23.795 #undef SPDK_CONFIG_ASAN 00:09:23.795 #undef SPDK_CONFIG_AVAHI 00:09:23.795 #undef SPDK_CONFIG_CET 00:09:23.795 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:23.795 #define SPDK_CONFIG_COVERAGE 1 00:09:23.795 #define SPDK_CONFIG_CROSS_PREFIX 00:09:23.795 #undef SPDK_CONFIG_CRYPTO 00:09:23.795 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:23.795 #undef SPDK_CONFIG_CUSTOMOCF 00:09:23.795 #undef SPDK_CONFIG_DAOS 00:09:23.795 #define SPDK_CONFIG_DAOS_DIR 00:09:23.795 #define SPDK_CONFIG_DEBUG 1 00:09:23.795 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:23.795 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:23.795 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:23.795 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:23.795 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:23.795 #undef SPDK_CONFIG_DPDK_UADK 00:09:23.795 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:23.795 #define SPDK_CONFIG_EXAMPLES 1 00:09:23.795 #undef SPDK_CONFIG_FC 00:09:23.795 #define SPDK_CONFIG_FC_PATH 00:09:23.795 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:23.795 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:23.795 #define SPDK_CONFIG_FSDEV 1 00:09:23.795 #undef SPDK_CONFIG_FUSE 00:09:23.795 #undef SPDK_CONFIG_FUZZER 00:09:23.795 #define SPDK_CONFIG_FUZZER_LIB 00:09:23.795 #undef SPDK_CONFIG_GOLANG 00:09:23.795 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:23.795 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:23.795 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:23.795 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:23.795 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:23.795 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:23.795 #undef SPDK_CONFIG_HAVE_LZ4 00:09:23.795 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:23.795 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:23.795 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:23.795 #define SPDK_CONFIG_IDXD 1 00:09:23.795 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:23.795 #undef SPDK_CONFIG_IPSEC_MB 00:09:23.795 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:23.795 #define SPDK_CONFIG_ISAL 1 00:09:23.795 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:23.795 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:23.795 #define SPDK_CONFIG_LIBDIR 00:09:23.795 #undef SPDK_CONFIG_LTO 00:09:23.795 #define SPDK_CONFIG_MAX_LCORES 128 00:09:23.795 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:23.795 #define SPDK_CONFIG_NVME_CUSE 1 00:09:23.795 #undef SPDK_CONFIG_OCF 00:09:23.795 #define SPDK_CONFIG_OCF_PATH 00:09:23.795 #define SPDK_CONFIG_OPENSSL_PATH 00:09:23.795 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:23.795 #define SPDK_CONFIG_PGO_DIR 00:09:23.795 #undef SPDK_CONFIG_PGO_USE 00:09:23.795 #define SPDK_CONFIG_PREFIX /usr/local 00:09:23.795 #undef SPDK_CONFIG_RAID5F 00:09:23.795 #undef SPDK_CONFIG_RBD 00:09:23.795 #define SPDK_CONFIG_RDMA 1 00:09:23.795 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:23.795 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:23.795 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:23.795 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:23.795 #define SPDK_CONFIG_SHARED 1 00:09:23.795 #undef SPDK_CONFIG_SMA 00:09:23.795 #define SPDK_CONFIG_TESTS 1 00:09:23.795 #undef SPDK_CONFIG_TSAN 
00:09:23.795 #define SPDK_CONFIG_UBLK 1 00:09:23.795 #define SPDK_CONFIG_UBSAN 1 00:09:23.795 #undef SPDK_CONFIG_UNIT_TESTS 00:09:23.795 #undef SPDK_CONFIG_URING 00:09:23.795 #define SPDK_CONFIG_URING_PATH 00:09:23.795 #undef SPDK_CONFIG_URING_ZNS 00:09:23.795 #undef SPDK_CONFIG_USDT 00:09:23.795 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:23.795 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:23.795 #define SPDK_CONFIG_VFIO_USER 1 00:09:23.795 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:23.795 #define SPDK_CONFIG_VHOST 1 00:09:23.795 #define SPDK_CONFIG_VIRTIO 1 00:09:23.795 #undef SPDK_CONFIG_VTUNE 00:09:23.795 #define SPDK_CONFIG_VTUNE_DIR 00:09:23.795 #define SPDK_CONFIG_WERROR 1 00:09:23.795 #define SPDK_CONFIG_WPDK_DIR 00:09:23.795 #undef SPDK_CONFIG_XNVME 00:09:23.795 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:23.795 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:23.796 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
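The run of "-- # : 0" / "-- # export SPDK_TEST_..." entries here and continuing below is autotest_common.sh giving every test flag a default before exporting it. A minimal bash sketch of that idiom, hedged (the real script may differ in detail, but under set -x this is exactly the kind of code that traces as ": 1" or ": tcp" followed by the matching export):

    # Keep any value the job config already set, otherwise take the default,
    # then export so nested test scripts see the same flag.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT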
00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:23.796 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:23.796 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
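The exported LD_LIBRARY_PATH value just above repeats the same three directories several times with a leading colon, which is consistent with this common setup being sourced once per nested test script and appending its library directories each time onto an initially empty value. A hedged sketch of that append, reusing the SPDK_LIB_DIR/DPDK_LIB_DIR/VFIO_LIB_DIR variables exported earlier in the trace:

    # Sourced repeatedly, so the same entries pile up; harmless, since the
    # dynamic loader stops at the first directory that resolves a library.
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"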
00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
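The asan_suppression_file handling traced above (rm -rf, cat, "echo leak:libfuse3.so", export LSAN_OPTIONS) amounts to rebuilding a LeakSanitizer suppression list and pointing LSAN at it; the cat step presumably folds in any pre-existing suppressions and is omitted here. A hedged, self-contained sketch of that step:

    # Recreate the suppression file, record the known libfuse3 leak, and tell
    # LeakSanitizer where to find the list.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"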
00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:23.797 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3226382 ]] 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3226382 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
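The "set_test_storage 2147483648" call that closes this entry drives the storage probing traced below: df -T output is read into mounts/fss/sizes/avails arrays, the filesystem backing the target directory is located, and the directory is kept only if it can hold the requested 2 GiB plus a small reserve (hence requested_size=2214592512), with a mktemp-style fallback under /tmp otherwise. As a hedged illustration of the space check only, not the function's actual body (has_enough_space is a made-up helper name):

    # Return success if the filesystem backing $1 has at least $2 bytes free.
    has_enough_space() {
        local dir=$1 requested_size=$2 avail
        avail=$(df --output=avail -B1 "$dir" | tail -n 1)
        (( avail >= requested_size ))
    }
    has_enough_space /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 2214592512

Once a candidate passes, the trace prints "* Found test storage at ..." and exports it as SPDK_TEST_STORAGE, as seen a few lines further down.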
00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3yRCXb 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3yRCXb/tests/target /tmp/spdk.3yRCXb 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:23.798 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88965332992 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6587072512 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766171648 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087462400 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23019520 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775813632 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=389120 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:23.798 09:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:23.798 * Looking for test storage... 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.798 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88965332992 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8801665024 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:23.798 09:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:23.798 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.799 --rc genhtml_branch_coverage=1 00:09:23.799 --rc genhtml_function_coverage=1 00:09:23.799 --rc genhtml_legend=1 00:09:23.799 --rc geninfo_all_blocks=1 00:09:23.799 --rc geninfo_unexecuted_blocks=1 00:09:23.799 00:09:23.799 ' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.799 --rc genhtml_branch_coverage=1 00:09:23.799 --rc genhtml_function_coverage=1 00:09:23.799 --rc genhtml_legend=1 00:09:23.799 --rc geninfo_all_blocks=1 00:09:23.799 --rc geninfo_unexecuted_blocks=1 00:09:23.799 00:09:23.799 ' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.799 --rc genhtml_branch_coverage=1 00:09:23.799 --rc genhtml_function_coverage=1 00:09:23.799 --rc genhtml_legend=1 00:09:23.799 --rc geninfo_all_blocks=1 00:09:23.799 --rc geninfo_unexecuted_blocks=1 00:09:23.799 00:09:23.799 ' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.799 --rc genhtml_branch_coverage=1 00:09:23.799 --rc genhtml_function_coverage=1 00:09:23.799 --rc genhtml_legend=1 00:09:23.799 --rc geninfo_all_blocks=1 00:09:23.799 --rc geninfo_unexecuted_blocks=1 00:09:23.799 00:09:23.799 ' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.799 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.800 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:29.061 
09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:29.061 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:29.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.061 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:29.062 Found net devices under 0000:af:00.0: cvl_0_0 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:29.062 Found net devices under 
0000:af:00.1: cvl_0_1 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.062 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.320 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:09:29.577 00:09:29.577 --- 10.0.0.2 ping statistics --- 00:09:29.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.577 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:09:29.577 00:09:29.577 --- 10.0.0.1 ping statistics --- 00:09:29.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.577 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.577 ************************************ 00:09:29.577 START TEST nvmf_filesystem_no_in_capsule 00:09:29.577 ************************************ 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
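Everything up to this point is plumbing: the two e810 ports show up as cvl_0_0 and cvl_0_1, the target-side port is isolated in its own network namespace, and a one-packet ping in each direction proves the 10.0.0.0/24 link works before any NVMe traffic is attempted. Condensed, using the same device and address names the trace prints (a sketch of nvmf_tcp_init's effect, not its literal body):

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1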
00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3229545 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3229545 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3229545 ']' 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.577 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.577 [2024-12-13 09:20:41.827426] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:09:29.577 [2024-12-13 09:20:41.827469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.577 [2024-12-13 09:20:41.889772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.577 [2024-12-13 09:20:41.932774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.577 [2024-12-13 09:20:41.932811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.577 [2024-12-13 09:20:41.932818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.577 [2024-12-13 09:20:41.932823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.577 [2024-12-13 09:20:41.932829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
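nvmfappstart simply launches nvmf_tgt inside that namespace and blocks until the target's RPC socket answers; the PID (3229545 here) is kept so the filesystem tests can later confirm the target survived with kill -0. Roughly the following, where the polling loop is an assumed stand-in for waitforlisten's real implementation:

  # Start the target in the namespace, remember its PID, wait for the RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # simplified stand-in for waitforlisten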
00:09:29.577 [2024-12-13 09:20:41.934315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.577 [2024-12-13 09:20:41.934341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.577 [2024-12-13 09:20:41.934427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.577 [2024-12-13 09:20:41.934429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.835 [2024-12-13 09:20:42.075146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.835 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 Malloc1 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.092 09:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 [2024-12-13 09:20:42.239386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.092 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:30.092 { 00:09:30.092 "name": "Malloc1", 00:09:30.092 "aliases": [ 00:09:30.092 "54549b11-e662-4f82-baab-36dac57e1d03" 00:09:30.092 ], 00:09:30.092 "product_name": "Malloc disk", 00:09:30.092 "block_size": 512, 00:09:30.092 "num_blocks": 1048576, 00:09:30.092 "uuid": "54549b11-e662-4f82-baab-36dac57e1d03", 00:09:30.092 "assigned_rate_limits": { 00:09:30.092 "rw_ios_per_sec": 0, 00:09:30.092 "rw_mbytes_per_sec": 0, 00:09:30.092 "r_mbytes_per_sec": 0, 00:09:30.092 "w_mbytes_per_sec": 0 00:09:30.092 }, 00:09:30.092 "claimed": true, 00:09:30.092 "claim_type": "exclusive_write", 00:09:30.092 "zoned": false, 00:09:30.092 "supported_io_types": { 00:09:30.092 "read": 
true, 00:09:30.092 "write": true, 00:09:30.092 "unmap": true, 00:09:30.092 "flush": true, 00:09:30.092 "reset": true, 00:09:30.092 "nvme_admin": false, 00:09:30.092 "nvme_io": false, 00:09:30.092 "nvme_io_md": false, 00:09:30.092 "write_zeroes": true, 00:09:30.093 "zcopy": true, 00:09:30.093 "get_zone_info": false, 00:09:30.093 "zone_management": false, 00:09:30.093 "zone_append": false, 00:09:30.093 "compare": false, 00:09:30.093 "compare_and_write": false, 00:09:30.093 "abort": true, 00:09:30.093 "seek_hole": false, 00:09:30.093 "seek_data": false, 00:09:30.093 "copy": true, 00:09:30.093 "nvme_iov_md": false 00:09:30.093 }, 00:09:30.093 "memory_domains": [ 00:09:30.093 { 00:09:30.093 "dma_device_id": "system", 00:09:30.093 "dma_device_type": 1 00:09:30.093 }, 00:09:30.093 { 00:09:30.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.093 "dma_device_type": 2 00:09:30.093 } 00:09:30.093 ], 00:09:30.093 "driver_specific": {} 00:09:30.093 } 00:09:30.093 ]' 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:30.093 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.462 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.462 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:31.462 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.462 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:31.462 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:33.359 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:33.924 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.856 ************************************ 00:09:34.856 START TEST filesystem_ext4 00:09:34.856 ************************************ 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
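Stripped of the xtrace prefixes, the provisioning that just happened is the sequence below: target-side RPCs create the TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, and a subsystem listening on the namespaced address; the host side then connects, locates the block device by its serial, and carves a single GPT partition. rpc_cmd is the test helper that forwards to the running target's RPC socket; the values are the ones printed above:

  # Target side (over the RPC socket of the nvmf_tgt started earlier)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side: connect, find the device by serial, make one partition spanning it.
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe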
00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:34.856 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:35.114 mke2fs 1.47.0 (5-Feb-2023) 00:09:35.114 Discarding device blocks: 0/522240 done 00:09:35.114 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:35.114 Filesystem UUID: 0523bfd7-f78e-4ad3-95ec-3b8d4268e849 00:09:35.114 Superblock backups stored on blocks: 00:09:35.114 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:35.114 00:09:35.114 Allocating group tables: 0/64 done 00:09:35.114 Writing inode tables: 0/64 done 00:09:36.047 Creating journal (8192 blocks): done 00:09:36.304 Writing superblocks and filesystem accounting information: 0/64 done 00:09:36.304 00:09:36.304 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:36.304 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:42.855 
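Each per-filesystem test then repeats the cycle the ext4 run shows: format the partition, mount it, create and delete a file with syncs in between, unmount, and finally check with kill -0 that the target process survived and that lsblk still sees both the namespace and its partition. As a sketch (nvmf_filesystem_create's retry handling around the unmount is omitted):

  mkfs.ext4 -F /dev/nvme0n1p1        # the btrfs and xfs runs use mkfs.<fstype> -f instead
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                 # target must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1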
09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3229545 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:42.855 00:09:42.855 real 0m7.171s 00:09:42.855 user 0m0.031s 00:09:42.855 sys 0m0.065s 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:42.855 ************************************ 00:09:42.855 END TEST filesystem_ext4 00:09:42.855 ************************************ 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.855 ************************************ 00:09:42.855 START TEST filesystem_btrfs 00:09:42.855 ************************************ 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:42.855 09:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:42.855 btrfs-progs v6.8.1 00:09:42.855 See https://btrfs.readthedocs.io for more information. 00:09:42.855 00:09:42.855 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:42.855 NOTE: several default settings have changed in version 5.15, please make sure 00:09:42.855 this does not affect your deployments: 00:09:42.855 - DUP for metadata (-m dup) 00:09:42.855 - enabled no-holes (-O no-holes) 00:09:42.855 - enabled free-space-tree (-R free-space-tree) 00:09:42.855 00:09:42.855 Label: (null) 00:09:42.855 UUID: cdf5cb22-929e-4aaf-835e-684d980fcc3f 00:09:42.855 Node size: 16384 00:09:42.855 Sector size: 4096 (CPU page size: 4096) 00:09:42.855 Filesystem size: 510.00MiB 00:09:42.855 Block group profiles: 00:09:42.855 Data: single 8.00MiB 00:09:42.855 Metadata: DUP 32.00MiB 00:09:42.855 System: DUP 8.00MiB 00:09:42.855 SSD detected: yes 00:09:42.855 Zoned device: no 00:09:42.855 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:42.855 Checksum: crc32c 00:09:42.855 Number of devices: 1 00:09:42.855 Devices: 00:09:42.855 ID SIZE PATH 00:09:42.855 1 510.00MiB /dev/nvme0n1p1 00:09:42.855 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:42.855 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:42.855 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3229545 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:43.112 
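The make_filesystem helper whose internals are traced above ('[' btrfs = ext4 ']', force=-f, mkfs.btrfs -f) really only branches on the force flag: ext4's mkfs wants -F, everything else takes -f. A reduced sketch consistent with those lines; the i=0 local hints at a retry loop that this sketch drops:

  # Simplified reading of make_filesystem as it appears in the trace.
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" "$force" "$dev_name"
  }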
09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:43.112 00:09:43.112 real 0m0.789s 00:09:43.112 user 0m0.031s 00:09:43.112 sys 0m0.111s 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:43.112 ************************************ 00:09:43.112 END TEST filesystem_btrfs 00:09:43.112 ************************************ 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:43.112 ************************************ 00:09:43.112 START TEST filesystem_xfs 00:09:43.112 ************************************ 00:09:43.112 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:43.113 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:43.113 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:43.113 = sectsz=512 attr=2, projid32bit=1 00:09:43.113 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:43.113 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:43.113 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:43.113 = sunit=0 swidth=0 blks 00:09:43.113 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:43.113 log =internal log bsize=4096 blocks=16384, version=2 00:09:43.113 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:43.113 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:44.044 Discarding blocks...Done. 00:09:44.044 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:44.044 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3229545 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.566 00:09:46.566 real 0m3.538s 00:09:46.566 user 0m0.015s 00:09:46.566 sys 0m0.084s 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:46.566 ************************************ 00:09:46.566 END TEST filesystem_xfs 00:09:46.566 ************************************ 00:09:46.566 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:46.823 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:46.823 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:47.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.081 09:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3229545 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3229545 ']' 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3229545 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229545 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.081 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.082 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229545' 00:09:47.082 killing process with pid 3229545 00:09:47.082 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3229545 00:09:47.082 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3229545 00:09:47.339 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:47.339 00:09:47.339 real 0m17.927s 00:09:47.339 user 1m10.598s 00:09:47.339 sys 0m1.415s 00:09:47.339 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.339 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.339 ************************************ 00:09:47.339 END TEST nvmf_filesystem_no_in_capsule 00:09:47.339 ************************************ 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.598 ************************************ 00:09:47.598 START TEST nvmf_filesystem_in_capsule 00:09:47.598 ************************************ 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3232685 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3232685 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3232685 ']' 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.598 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.598 [2024-12-13 09:20:59.831296] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:09:47.598 [2024-12-13 09:20:59.831342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.598 [2024-12-13 09:20:59.897512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.598 [2024-12-13 09:20:59.939903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.598 [2024-12-13 09:20:59.939938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.598 [2024-12-13 09:20:59.939945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.598 [2024-12-13 09:20:59.939951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.598 [2024-12-13 09:20:59.939956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.598 [2024-12-13 09:20:59.941276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.598 [2024-12-13 09:20:59.941376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.598 [2024-12-13 09:20:59.941473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.598 [2024-12-13 09:20:59.941474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.856 [2024-12-13 09:21:00.088307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.856 09:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.856 Malloc1 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.856 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.114 [2024-12-13 09:21:00.252610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:48.114 09:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:48.114 { 00:09:48.114 "name": "Malloc1", 00:09:48.114 "aliases": [ 00:09:48.114 "3c026bd4-bddc-452f-ad48-633c94780224" 00:09:48.114 ], 00:09:48.114 "product_name": "Malloc disk", 00:09:48.114 "block_size": 512, 00:09:48.114 "num_blocks": 1048576, 00:09:48.114 "uuid": "3c026bd4-bddc-452f-ad48-633c94780224", 00:09:48.114 "assigned_rate_limits": { 00:09:48.114 "rw_ios_per_sec": 0, 00:09:48.114 "rw_mbytes_per_sec": 0, 00:09:48.114 "r_mbytes_per_sec": 0, 00:09:48.114 "w_mbytes_per_sec": 0 00:09:48.114 }, 00:09:48.114 "claimed": true, 00:09:48.114 "claim_type": "exclusive_write", 00:09:48.114 "zoned": false, 00:09:48.114 "supported_io_types": { 00:09:48.114 "read": true, 00:09:48.114 "write": true, 00:09:48.114 "unmap": true, 00:09:48.114 "flush": true, 00:09:48.114 "reset": true, 00:09:48.114 "nvme_admin": false, 00:09:48.114 "nvme_io": false, 00:09:48.114 "nvme_io_md": false, 00:09:48.114 "write_zeroes": true, 00:09:48.114 "zcopy": true, 00:09:48.114 "get_zone_info": false, 00:09:48.114 "zone_management": false, 00:09:48.114 "zone_append": false, 00:09:48.114 "compare": false, 00:09:48.114 "compare_and_write": false, 00:09:48.114 "abort": true, 00:09:48.114 "seek_hole": false, 00:09:48.114 "seek_data": false, 00:09:48.114 "copy": true, 00:09:48.114 "nvme_iov_md": false 00:09:48.114 }, 00:09:48.114 "memory_domains": [ 00:09:48.114 { 00:09:48.114 "dma_device_id": "system", 00:09:48.114 "dma_device_type": 1 00:09:48.114 }, 00:09:48.114 { 00:09:48.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.114 "dma_device_type": 2 00:09:48.114 } 00:09:48.114 ], 00:09:48.114 "driver_specific": {} 00:09:48.114 } 00:09:48.114 ]' 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:48.114 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.487 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.487 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:49.487 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.487 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:49.487 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:51.383 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:51.383 09:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:52.387 09:21:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.364 ************************************ 00:09:53.364 START TEST filesystem_in_capsule_ext4 00:09:53.364 ************************************ 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:53.364 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:53.364 mke2fs 1.47.0 (5-Feb-2023) 00:09:53.364 Discarding device blocks: 0/522240 done 00:09:53.364 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:53.364 Filesystem UUID: daac6d9e-82af-4139-84ca-957c80737b01 00:09:53.364 Superblock backups stored on blocks: 00:09:53.364 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:53.364 00:09:53.364 Allocating group tables: 0/64 done 00:09:53.364 Writing inode tables: 
0/64 done 00:09:53.622 Creating journal (8192 blocks): done 00:09:54.994 Writing superblocks and filesystem accounting information: 0/64 done 00:09:54.994 00:09:54.994 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:54.994 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.561 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3232685 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.562 00:10:01.562 real 0m7.703s 00:10:01.562 user 0m0.023s 00:10:01.562 sys 0m0.076s 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:01.562 ************************************ 00:10:01.562 END TEST filesystem_in_capsule_ext4 00:10:01.562 ************************************ 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.562 
************************************ 00:10:01.562 START TEST filesystem_in_capsule_btrfs 00:10:01.562 ************************************ 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:01.562 btrfs-progs v6.8.1 00:10:01.562 See https://btrfs.readthedocs.io for more information. 00:10:01.562 00:10:01.562 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:01.562 NOTE: several default settings have changed in version 5.15, please make sure 00:10:01.562 this does not affect your deployments: 00:10:01.562 - DUP for metadata (-m dup) 00:10:01.562 - enabled no-holes (-O no-holes) 00:10:01.562 - enabled free-space-tree (-R free-space-tree) 00:10:01.562 00:10:01.562 Label: (null) 00:10:01.562 UUID: 7fa246e5-af82-45c1-85ee-532bafa96ff1 00:10:01.562 Node size: 16384 00:10:01.562 Sector size: 4096 (CPU page size: 4096) 00:10:01.562 Filesystem size: 510.00MiB 00:10:01.562 Block group profiles: 00:10:01.562 Data: single 8.00MiB 00:10:01.562 Metadata: DUP 32.00MiB 00:10:01.562 System: DUP 8.00MiB 00:10:01.562 SSD detected: yes 00:10:01.562 Zoned device: no 00:10:01.562 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:01.562 Checksum: crc32c 00:10:01.562 Number of devices: 1 00:10:01.562 Devices: 00:10:01.562 ID SIZE PATH 00:10:01.562 1 510.00MiB /dev/nvme0n1p1 00:10:01.562 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3232685 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.562 00:10:01.562 real 0m0.569s 00:10:01.562 user 0m0.023s 00:10:01.562 sys 0m0.114s 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:01.562 ************************************ 00:10:01.562 END TEST filesystem_in_capsule_btrfs 00:10:01.562 ************************************ 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.562 ************************************ 00:10:01.562 START TEST filesystem_in_capsule_xfs 00:10:01.562 ************************************ 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:01.562 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:01.821 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:01.821 = sectsz=512 attr=2, projid32bit=1 00:10:01.821 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:01.821 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:01.821 data = bsize=4096 blocks=130560, imaxpct=25 00:10:01.821 = sunit=0 swidth=0 blks 00:10:01.821 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:01.821 log =internal log bsize=4096 blocks=16384, version=2 00:10:01.821 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:01.821 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:02.754 Discarding blocks...Done. 
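(Condensed sketch, not part of the captured trace: the xfs run above follows the same create-and-verify pattern as the ext4 and btrfs runs in this log. All commands below appear verbatim in the trace; the device path, mount point, and the $nvmfpid variable are placeholders standing in for the concrete values the test used, e.g. pid 3232685.)

mkfs.xfs -f /dev/nvme0n1p1                 # make_filesystem passes -f here; ext4 gets -F instead
mount /dev/nvme0n1p1 /mnt/device           # mount the partition exported over NVMe/TCP
touch /mnt/device/aaa && sync              # write a file and flush it
rm /mnt/device/aaa && sync                 # remove it and flush again
umount /mnt/device
kill -0 "$nvmfpid"                         # nvmf target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace block device still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1    # its partition still visible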
00:10:02.754 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:02.754 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3232685 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:05.281 00:10:05.281 real 0m3.458s 00:10:05.281 user 0m0.015s 00:10:05.281 sys 0m0.082s 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:05.281 ************************************ 00:10:05.281 END TEST filesystem_in_capsule_xfs 00:10:05.281 ************************************ 00:10:05.281 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3232685 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3232685 ']' 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3232685 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232685 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232685' 00:10:05.540 killing process with pid 3232685 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3232685 00:10:05.540 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3232685 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:06.107 00:10:06.107 real 0m18.430s 00:10:06.107 user 1m12.586s 00:10:06.107 sys 0m1.426s 00:10:06.107 09:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.107 ************************************ 00:10:06.107 END TEST nvmf_filesystem_in_capsule 00:10:06.107 ************************************ 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.107 rmmod nvme_tcp 00:10:06.107 rmmod nvme_fabrics 00:10:06.107 rmmod nvme_keyring 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:06.107 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.108 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.641 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.641 00:10:08.641 real 0m44.658s 00:10:08.641 user 2m25.003s 00:10:08.641 sys 0m7.254s 00:10:08.641 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.641 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.641 
************************************ 00:10:08.641 END TEST nvmf_filesystem 00:10:08.641 ************************************ 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.642 ************************************ 00:10:08.642 START TEST nvmf_target_discovery 00:10:08.642 ************************************ 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:08.642 * Looking for test storage... 00:10:08.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.642 --rc genhtml_branch_coverage=1 00:10:08.642 --rc genhtml_function_coverage=1 00:10:08.642 --rc genhtml_legend=1 00:10:08.642 --rc geninfo_all_blocks=1 00:10:08.642 --rc geninfo_unexecuted_blocks=1 00:10:08.642 00:10:08.642 ' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.642 --rc genhtml_branch_coverage=1 00:10:08.642 --rc genhtml_function_coverage=1 00:10:08.642 --rc genhtml_legend=1 00:10:08.642 --rc geninfo_all_blocks=1 00:10:08.642 --rc geninfo_unexecuted_blocks=1 00:10:08.642 00:10:08.642 ' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.642 --rc genhtml_branch_coverage=1 00:10:08.642 --rc genhtml_function_coverage=1 00:10:08.642 --rc genhtml_legend=1 00:10:08.642 --rc geninfo_all_blocks=1 00:10:08.642 --rc geninfo_unexecuted_blocks=1 00:10:08.642 00:10:08.642 ' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.642 --rc genhtml_branch_coverage=1 00:10:08.642 --rc genhtml_function_coverage=1 00:10:08.642 --rc genhtml_legend=1 00:10:08.642 --rc geninfo_all_blocks=1 00:10:08.642 --rc geninfo_unexecuted_blocks=1 00:10:08.642 00:10:08.642 ' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.642 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.643 09:21:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.914 09:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:13.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:13.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:13.914 Found net devices under 0000:af:00.0: cvl_0_0 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:13.914 Found net devices under 0000:af:00.1: cvl_0_1 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.914 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.915 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.915 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.915 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.915 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.915 09:21:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.915 09:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:10:13.915 00:10:13.915 --- 10.0.0.2 ping statistics --- 00:10:13.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.915 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:13.915 00:10:13.915 --- 10.0.0.1 ping statistics --- 00:10:13.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.915 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3239799 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3239799 00:10:13.915 09:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3239799 ']' 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.915 09:21:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.915 [2024-12-13 09:21:26.249513] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:13.915 [2024-12-13 09:21:26.249555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.174 [2024-12-13 09:21:26.319709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.174 [2024-12-13 09:21:26.364879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.174 [2024-12-13 09:21:26.364912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.174 [2024-12-13 09:21:26.364920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.174 [2024-12-13 09:21:26.364927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.174 [2024-12-13 09:21:26.364932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:14.174 [2024-12-13 09:21:26.366292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.174 [2024-12-13 09:21:26.366317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.174 [2024-12-13 09:21:26.366425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.174 [2024-12-13 09:21:26.366427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.740 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.740 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:14.740 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.740 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.740 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 [2024-12-13 09:21:27.121504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 Null1 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 [2024-12-13 09:21:27.180572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 Null2 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:14.999 Null3 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 Null4 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.999 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:15.000 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.000 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.000 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.000 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:15.258 00:10:15.258 Discovery Log Number of Records 6, Generation counter 6 00:10:15.258 =====Discovery Log Entry 0====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: current discovery subsystem 00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4420 00:10:15.258 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: explicit discovery connections, duplicate discovery information 00:10:15.258 sectype: none 00:10:15.258 =====Discovery Log Entry 1====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: nvme subsystem 00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4420 00:10:15.258 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: none 00:10:15.258 sectype: none 00:10:15.258 =====Discovery Log Entry 2====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: nvme subsystem 00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4420 00:10:15.258 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: none 00:10:15.258 sectype: none 00:10:15.258 =====Discovery Log Entry 3====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: nvme subsystem 00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4420 00:10:15.258 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: none 00:10:15.258 sectype: none 00:10:15.258 =====Discovery Log Entry 4====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: nvme subsystem 
00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4420 00:10:15.258 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: none 00:10:15.258 sectype: none 00:10:15.258 =====Discovery Log Entry 5====== 00:10:15.258 trtype: tcp 00:10:15.258 adrfam: ipv4 00:10:15.258 subtype: discovery subsystem referral 00:10:15.258 treq: not required 00:10:15.258 portid: 0 00:10:15.258 trsvcid: 4430 00:10:15.258 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:15.258 traddr: 10.0.0.2 00:10:15.258 eflags: none 00:10:15.258 sectype: none 00:10:15.258 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:15.258 Perform nvmf subsystem discovery via RPC 00:10:15.258 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:15.258 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.258 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.258 [ 00:10:15.258 { 00:10:15.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.258 "subtype": "Discovery", 00:10:15.258 "listen_addresses": [ 00:10:15.258 { 00:10:15.258 "trtype": "TCP", 00:10:15.258 "adrfam": "IPv4", 00:10:15.258 "traddr": "10.0.0.2", 00:10:15.258 "trsvcid": "4420" 00:10:15.258 } 00:10:15.258 ], 00:10:15.258 "allow_any_host": true, 00:10:15.258 "hosts": [] 00:10:15.258 }, 00:10:15.258 { 00:10:15.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.258 "subtype": "NVMe", 00:10:15.259 "listen_addresses": [ 00:10:15.259 { 00:10:15.259 "trtype": "TCP", 00:10:15.259 "adrfam": "IPv4", 00:10:15.259 "traddr": "10.0.0.2", 00:10:15.259 "trsvcid": "4420" 00:10:15.259 } 00:10:15.259 ], 00:10:15.259 "allow_any_host": true, 00:10:15.259 "hosts": [], 00:10:15.259 "serial_number": "SPDK00000000000001", 00:10:15.259 "model_number": "SPDK bdev Controller", 00:10:15.259 "max_namespaces": 32, 00:10:15.259 "min_cntlid": 1, 00:10:15.259 "max_cntlid": 65519, 00:10:15.259 "namespaces": [ 00:10:15.259 { 00:10:15.259 "nsid": 1, 00:10:15.259 "bdev_name": "Null1", 00:10:15.259 "name": "Null1", 00:10:15.259 "nguid": "AB18B910A81645BFACBAB2386015F79A", 00:10:15.259 "uuid": "ab18b910-a816-45bf-acba-b2386015f79a" 00:10:15.259 } 00:10:15.259 ] 00:10:15.259 }, 00:10:15.259 { 00:10:15.259 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:15.259 "subtype": "NVMe", 00:10:15.259 "listen_addresses": [ 00:10:15.259 { 00:10:15.259 "trtype": "TCP", 00:10:15.259 "adrfam": "IPv4", 00:10:15.259 "traddr": "10.0.0.2", 00:10:15.259 "trsvcid": "4420" 00:10:15.259 } 00:10:15.259 ], 00:10:15.259 "allow_any_host": true, 00:10:15.259 "hosts": [], 00:10:15.259 "serial_number": "SPDK00000000000002", 00:10:15.259 "model_number": "SPDK bdev Controller", 00:10:15.259 "max_namespaces": 32, 00:10:15.259 "min_cntlid": 1, 00:10:15.259 "max_cntlid": 65519, 00:10:15.259 "namespaces": [ 00:10:15.259 { 00:10:15.259 "nsid": 1, 00:10:15.259 "bdev_name": "Null2", 00:10:15.259 "name": "Null2", 00:10:15.259 "nguid": "8AF025B48171416A861F8D408C69888E", 00:10:15.259 "uuid": "8af025b4-8171-416a-861f-8d408c69888e" 00:10:15.259 } 00:10:15.259 ] 00:10:15.259 }, 00:10:15.259 { 00:10:15.259 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:15.259 "subtype": "NVMe", 00:10:15.259 "listen_addresses": [ 00:10:15.259 { 00:10:15.259 "trtype": "TCP", 00:10:15.259 "adrfam": "IPv4", 00:10:15.259 "traddr": "10.0.0.2", 
00:10:15.259 "trsvcid": "4420" 00:10:15.259 } 00:10:15.259 ], 00:10:15.259 "allow_any_host": true, 00:10:15.259 "hosts": [], 00:10:15.259 "serial_number": "SPDK00000000000003", 00:10:15.259 "model_number": "SPDK bdev Controller", 00:10:15.259 "max_namespaces": 32, 00:10:15.259 "min_cntlid": 1, 00:10:15.259 "max_cntlid": 65519, 00:10:15.259 "namespaces": [ 00:10:15.259 { 00:10:15.259 "nsid": 1, 00:10:15.259 "bdev_name": "Null3", 00:10:15.259 "name": "Null3", 00:10:15.259 "nguid": "D8194ED32FAF4869A7AEF7620663C4EC", 00:10:15.259 "uuid": "d8194ed3-2faf-4869-a7ae-f7620663c4ec" 00:10:15.259 } 00:10:15.259 ] 00:10:15.259 }, 00:10:15.259 { 00:10:15.259 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:15.259 "subtype": "NVMe", 00:10:15.259 "listen_addresses": [ 00:10:15.259 { 00:10:15.259 "trtype": "TCP", 00:10:15.259 "adrfam": "IPv4", 00:10:15.259 "traddr": "10.0.0.2", 00:10:15.259 "trsvcid": "4420" 00:10:15.259 } 00:10:15.259 ], 00:10:15.259 "allow_any_host": true, 00:10:15.259 "hosts": [], 00:10:15.259 "serial_number": "SPDK00000000000004", 00:10:15.259 "model_number": "SPDK bdev Controller", 00:10:15.259 "max_namespaces": 32, 00:10:15.259 "min_cntlid": 1, 00:10:15.259 "max_cntlid": 65519, 00:10:15.259 "namespaces": [ 00:10:15.259 { 00:10:15.259 "nsid": 1, 00:10:15.259 "bdev_name": "Null4", 00:10:15.259 "name": "Null4", 00:10:15.259 "nguid": "38DEBA0B6DD14408B1C09EB84AAA2D4A", 00:10:15.259 "uuid": "38deba0b-6dd1-4408-b1c0-9eb84aaa2d4a" 00:10:15.259 } 00:10:15.259 ] 00:10:15.259 } 00:10:15.259 ] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:15.259 09:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.259 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.518 rmmod nvme_tcp 00:10:15.518 rmmod nvme_fabrics 00:10:15.518 rmmod nvme_keyring 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3239799 ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3239799 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3239799 ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3239799 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3239799 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3239799' 00:10:15.518 killing process with pid 3239799 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3239799 00:10:15.518 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3239799 00:10:15.777 09:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.777 09:21:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.680 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.680 00:10:17.680 real 0m9.505s 00:10:17.680 user 0m7.946s 00:10:17.680 sys 0m4.585s 00:10:17.680 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.680 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.680 ************************************ 00:10:17.680 END TEST nvmf_target_discovery 00:10:17.680 ************************************ 00:10:17.680 09:21:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:17.680 09:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.680 09:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.680 09:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.680 ************************************ 00:10:17.680 START TEST nvmf_referrals 00:10:17.680 ************************************ 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:17.939 * Looking for test storage... 
00:10:17.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:17.939 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.940 --rc genhtml_branch_coverage=1 00:10:17.940 --rc genhtml_function_coverage=1 00:10:17.940 --rc genhtml_legend=1 00:10:17.940 --rc geninfo_all_blocks=1 00:10:17.940 --rc geninfo_unexecuted_blocks=1 00:10:17.940 00:10:17.940 ' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.940 --rc genhtml_branch_coverage=1 00:10:17.940 --rc genhtml_function_coverage=1 00:10:17.940 --rc genhtml_legend=1 00:10:17.940 --rc geninfo_all_blocks=1 00:10:17.940 --rc geninfo_unexecuted_blocks=1 00:10:17.940 00:10:17.940 ' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.940 --rc genhtml_branch_coverage=1 00:10:17.940 --rc genhtml_function_coverage=1 00:10:17.940 --rc genhtml_legend=1 00:10:17.940 --rc geninfo_all_blocks=1 00:10:17.940 --rc geninfo_unexecuted_blocks=1 00:10:17.940 00:10:17.940 ' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.940 --rc genhtml_branch_coverage=1 00:10:17.940 --rc genhtml_function_coverage=1 00:10:17.940 --rc genhtml_legend=1 00:10:17.940 --rc geninfo_all_blocks=1 00:10:17.940 --rc geninfo_unexecuted_blocks=1 00:10:17.940 00:10:17.940 ' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.940 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:23.206 09:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:23.206 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:23.206 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:23.207 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:23.207 
09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:23.207 Found net devices under 0000:af:00.0: cvl_0_0 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:23.207 Found net devices under 0000:af:00.1: cvl_0_1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.207 09:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:10:23.207 00:10:23.207 --- 10.0.0.2 ping statistics --- 00:10:23.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.207 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:23.207 00:10:23.207 --- 10.0.0.1 ping statistics --- 00:10:23.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.207 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3243513 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3243513 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3243513 ']' 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.207 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.207 [2024-12-13 09:21:35.556429] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:23.207 [2024-12-13 09:21:35.556480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.466 [2024-12-13 09:21:35.624453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.466 [2024-12-13 09:21:35.666438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.466 [2024-12-13 09:21:35.666477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.466 [2024-12-13 09:21:35.666485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.466 [2024-12-13 09:21:35.666494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.466 [2024-12-13 09:21:35.666498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.466 [2024-12-13 09:21:35.667835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.466 [2024-12-13 09:21:35.667855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.466 [2024-12-13 09:21:35.667943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.466 [2024-12-13 09:21:35.667944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.466 [2024-12-13 09:21:35.805964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.466 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:23.724 [2024-12-13 09:21:35.833612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.724 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.725 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:23.983 09:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.983 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:24.241 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:24.499 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:24.500 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:24.500 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:24.500 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.500 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:24.757 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:24.757 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:24.757 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.757 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.757 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.758 09:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:24.758 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:25.015 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:25.015 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:25.016 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:25.274 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.553 rmmod nvme_tcp 00:10:25.553 rmmod nvme_fabrics 00:10:25.553 rmmod nvme_keyring 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3243513 ']' 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3243513 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3243513 ']' 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3243513 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243513 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243513' 00:10:25.553 killing process with pid 3243513 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3243513 00:10:25.553 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3243513 00:10:25.811 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.811 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:25.811 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:25.811 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.811 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.811 09:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.713 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.713 00:10:27.713 real 0m10.020s 00:10:27.713 user 0m11.516s 00:10:27.713 sys 0m4.783s 00:10:27.713 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.713 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.713 ************************************ 00:10:27.713 END TEST nvmf_referrals 00:10:27.713 ************************************ 00:10:27.970 09:21:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:27.970 09:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.970 09:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.970 09:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:27.970 ************************************ 00:10:27.970 START TEST nvmf_connect_disconnect 00:10:27.970 ************************************ 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:27.971 * Looking for test storage... 00:10:27.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.971 09:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.971 --rc genhtml_branch_coverage=1 00:10:27.971 --rc genhtml_function_coverage=1 00:10:27.971 --rc genhtml_legend=1 00:10:27.971 --rc geninfo_all_blocks=1 00:10:27.971 --rc geninfo_unexecuted_blocks=1 00:10:27.971 00:10:27.971 ' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.971 --rc genhtml_branch_coverage=1 00:10:27.971 --rc genhtml_function_coverage=1 00:10:27.971 --rc genhtml_legend=1 00:10:27.971 --rc geninfo_all_blocks=1 00:10:27.971 --rc geninfo_unexecuted_blocks=1 00:10:27.971 00:10:27.971 ' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.971 --rc genhtml_branch_coverage=1 00:10:27.971 --rc genhtml_function_coverage=1 00:10:27.971 --rc genhtml_legend=1 00:10:27.971 --rc geninfo_all_blocks=1 00:10:27.971 --rc geninfo_unexecuted_blocks=1 00:10:27.971 00:10:27.971 ' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.971 --rc genhtml_branch_coverage=1 00:10:27.971 --rc genhtml_function_coverage=1 00:10:27.971 --rc genhtml_legend=1 00:10:27.971 --rc geninfo_all_blocks=1 00:10:27.971 --rc geninfo_unexecuted_blocks=1 00:10:27.971 00:10:27.971 ' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.971 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.229 09:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.229 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.230 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.495 
09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:33.495 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.495 
09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:33.495 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:33.495 Found net devices under 0000:af:00.0: cvl_0_0 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
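[Editor's note] The gather_supported_nvmf_pci_devs walk being traced here boils down to matching PCI vendor/device IDs against the supported-NIC tables (both ports of an Intel E810, 0x8086:0x159b, in this run) and then reading the net interface names sysfs exposes under each matching device. A rough standalone equivalent, assuming the usual sysfs layout rather than the script's cached PCI bus map:

  intel=0x8086
  e810_dev=0x159b
  for pci in /sys/bus/pci/devices/*; do
      # keep only devices whose vendor/device IDs match the E810 entry
      [[ $(< "$pci/vendor") == "$intel" && $(< "$pci/device") == "$e810_dev" ]] || continue
      for net in "$pci"/net/*; do
          # the interface name under <pci>/net/ is what later becomes cvl_0_0 / cvl_0_1
          [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done

This is only a sketch of the idea; the real nvmf/common.sh additionally filters on driver (ice vs. mlx5), link state, and RDMA capability, as the trace above shows.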
00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:33.495 Found net devices under 0000:af:00.1: cvl_0_1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.495 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:10:33.496 00:10:33.496 --- 10.0.0.2 ping statistics --- 00:10:33.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.496 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:10:33.496 00:10:33.496 --- 10.0.0.1 ping statistics --- 00:10:33.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.496 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3247510 00:10:33.496 09:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3247510 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3247510 ']' 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.496 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.753 [2024-12-13 09:21:45.896927] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:10:33.753 [2024-12-13 09:21:45.896975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.753 [2024-12-13 09:21:45.964282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.753 [2024-12-13 09:21:46.006305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.753 [2024-12-13 09:21:46.006341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.753 [2024-12-13 09:21:46.006348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.753 [2024-12-13 09:21:46.006355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.753 [2024-12-13 09:21:46.006360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
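[Editor's note] Everything from nvmf_tcp_init through nvmfappstart above is ordinary iproute2/iptables plumbing plus launching the target inside the new namespace. Condensed from the commands actually traced in this run (the initial address flushes of cvl_0_0/cvl_0_1 are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With both pings succeeding, nvmf_tgt is started under the namespace with tracing enabled (-e 0xFFFF) and a four-core mask (-m 0xF), which is why the DPDK EAL and reactor notices that follow report cores 0-3.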
00:10:33.753 [2024-12-13 09:21:46.007807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.753 [2024-12-13 09:21:46.007909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.753 [2024-12-13 09:21:46.008016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.753 [2024-12-13 09:21:46.008017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.753 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.753 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:33.753 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.753 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.753 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 [2024-12-13 09:21:46.146231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 09:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 [2024-12-13 09:21:46.212190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:34.011 09:21:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:37.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.449 rmmod nvme_tcp 00:10:50.449 rmmod nvme_fabrics 00:10:50.449 rmmod nvme_keyring 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3247510 ']' 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3247510 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3247510 ']' 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3247510 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
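[Editor's note] The bring-up that precedes the five connect/disconnect iterations is a short RPC sequence against the freshly started target; the loop itself runs with xtrace off, so only the "disconnected 1 controller(s)" lines appear above. Roughly, per connect_disconnect.sh (the nvme connect flags are inferred from the NVME_CONNECT/NVME_HOST variables set earlier, not echoed in this trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                                   # 64 MB bdev, 512-byte blocks -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                   # prints "... disconnected 1 controller(s)"
  done

Each iteration succeeding is what produces the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above before nvmftestfini tears the target down.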
00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247510 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.449 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247510' 00:10:50.450 killing process with pid 3247510 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3247510 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3247510 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.450 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.979 00:10:52.979 real 0m24.666s 00:10:52.979 user 1m7.986s 00:10:52.979 sys 0m5.487s 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:52.979 ************************************ 00:10:52.979 END TEST nvmf_connect_disconnect 00:10:52.979 ************************************ 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.979 09:22:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.979 ************************************ 00:10:52.979 START TEST nvmf_multitarget 00:10:52.979 ************************************ 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:52.979 * Looking for test storage... 00:10:52.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.979 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.979 --rc genhtml_branch_coverage=1 00:10:52.979 --rc genhtml_function_coverage=1 00:10:52.979 --rc genhtml_legend=1 00:10:52.979 --rc geninfo_all_blocks=1 00:10:52.979 --rc geninfo_unexecuted_blocks=1 00:10:52.979 00:10:52.979 ' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.979 --rc genhtml_branch_coverage=1 00:10:52.979 --rc genhtml_function_coverage=1 00:10:52.979 --rc genhtml_legend=1 00:10:52.979 --rc geninfo_all_blocks=1 00:10:52.979 --rc geninfo_unexecuted_blocks=1 00:10:52.979 00:10:52.979 ' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.979 --rc genhtml_branch_coverage=1 00:10:52.979 --rc genhtml_function_coverage=1 00:10:52.979 --rc genhtml_legend=1 00:10:52.979 --rc geninfo_all_blocks=1 00:10:52.979 --rc geninfo_unexecuted_blocks=1 00:10:52.979 00:10:52.979 ' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.979 --rc genhtml_branch_coverage=1 00:10:52.979 --rc genhtml_function_coverage=1 00:10:52.979 --rc genhtml_legend=1 00:10:52.979 --rc geninfo_all_blocks=1 00:10:52.979 --rc geninfo_unexecuted_blocks=1 00:10:52.979 00:10:52.979 ' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.979 09:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:52.979 09:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.979 09:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
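(The gather_supported_nvmf_pci_devs step traced here, and finishing just below, is the test working out which NICs it may use: it builds allow-lists of NVMe-oF-capable PCI IDs and then resolves each matching PCI function to its kernel net device through sysfs. A condensed restatement of the commands already visible in this trace, not an extra step in the run:)

    intel=0x8086 mellanox=0x15b3                        # vendor IDs behind the allow-lists
    e810+=(${pci_bus_cache["$intel:0x1592"]})           # Intel E810 device IDs; this node's two
    e810+=(${pci_bus_cache["$intel:0x159b"]})           # ports (0000:af:00.0/.1) report 0x159b
    pci_devs=("${e810[@]}")                             # the e810 list is selected for this tcp run
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:af:00.0/net/cvl_0_0
      net_devs+=("${pci_net_devs[@]##*/}")              # -> cvl_0_0, cvl_0_1
    done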
00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:58.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:58.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:58.244 Found net devices under 0000:af:00.0: cvl_0_0 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:58.244 Found net devices under 0000:af:00.1: cvl_0_1 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.244 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:10:58.503 00:10:58.503 --- 10.0.0.2 ping statistics --- 00:10:58.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.503 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:58.503 00:10:58.503 --- 10.0.0.1 ping statistics --- 00:10:58.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.503 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3253773 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3253773 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3253773 ']' 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.503 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.503 [2024-12-13 09:22:10.836764] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
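(For reference, the nvmf_tcp_init sequence traced above boils down to the following; interface names and addresses are the ones used in this run, and nothing here goes beyond the commands already shown in the log:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port gets its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
    modprobe nvme-tcp                                       # host-side NVMe/TCP driver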
00:10:58.503 [2024-12-13 09:22:10.836812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.760 [2024-12-13 09:22:10.903690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.760 [2024-12-13 09:22:10.945615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.760 [2024-12-13 09:22:10.945655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.760 [2024-12-13 09:22:10.945663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.760 [2024-12-13 09:22:10.945670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.760 [2024-12-13 09:22:10.945675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.760 [2024-12-13 09:22:10.947158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.761 [2024-12-13 09:22:10.947253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.761 [2024-12-13 09:22:10.947339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.761 [2024-12-13 09:22:10.947340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:58.761 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:59.016 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:59.016 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:59.016 "nvmf_tgt_1" 00:10:59.017 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:59.273 "nvmf_tgt_2" 00:10:59.273 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:10:59.273 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:59.273 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:59.273 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:59.273 true 00:10:59.273 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:59.530 true 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.530 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.530 rmmod nvme_tcp 00:10:59.530 rmmod nvme_fabrics 00:10:59.530 rmmod nvme_keyring 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3253773 ']' 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3253773 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3253773 ']' 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3253773 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3253773 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.789 09:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3253773' 00:10:59.789 killing process with pid 3253773 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3253773 00:10:59.789 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3253773 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.789 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.320 00:11:02.320 real 0m9.327s 00:11:02.320 user 0m7.222s 00:11:02.320 sys 0m4.667s 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:02.320 ************************************ 00:11:02.320 END TEST nvmf_multitarget 00:11:02.320 ************************************ 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.320 ************************************ 00:11:02.320 START TEST nvmf_rpc 00:11:02.320 ************************************ 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.320 * Looking for test storage... 
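(Stripped of the tracing, the nvmf_multitarget case that just completed above exercises only a handful of RPCs through the multitarget_rpc.py wrapper; roughly:)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length            # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length            # 3
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length            # back to 1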
00:11:02.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.320 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.320 --rc genhtml_branch_coverage=1 00:11:02.320 --rc genhtml_function_coverage=1 00:11:02.320 --rc genhtml_legend=1 00:11:02.320 --rc geninfo_all_blocks=1 00:11:02.320 --rc geninfo_unexecuted_blocks=1 00:11:02.321 00:11:02.321 ' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.321 --rc genhtml_branch_coverage=1 00:11:02.321 --rc genhtml_function_coverage=1 00:11:02.321 --rc genhtml_legend=1 00:11:02.321 --rc geninfo_all_blocks=1 00:11:02.321 --rc geninfo_unexecuted_blocks=1 00:11:02.321 00:11:02.321 ' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.321 --rc genhtml_branch_coverage=1 00:11:02.321 --rc genhtml_function_coverage=1 00:11:02.321 --rc genhtml_legend=1 00:11:02.321 --rc geninfo_all_blocks=1 00:11:02.321 --rc geninfo_unexecuted_blocks=1 00:11:02.321 00:11:02.321 ' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.321 --rc genhtml_branch_coverage=1 00:11:02.321 --rc genhtml_function_coverage=1 00:11:02.321 --rc genhtml_legend=1 00:11:02.321 --rc geninfo_all_blocks=1 00:11:02.321 --rc geninfo_unexecuted_blocks=1 00:11:02.321 00:11:02.321 ' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.321 09:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.321 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.595 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.596 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.596 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.596 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.596 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.596 09:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:11:07.596 00:11:07.596 --- 10.0.0.2 ping statistics --- 00:11:07.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.596 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:07.596 00:11:07.596 --- 10.0.0.1 ping statistics --- 00:11:07.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.596 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3257290 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3257290 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3257290 ']' 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.596 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 [2024-12-13 09:22:19.776044] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
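(nvmfappstart, traced here for the second time, amounts to launching the target inside the namespace and waiting for its RPC socket. A rough equivalent of the manual steps, assuming a simple backgrounded launch rather than the helper's own bookkeeping:)

    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                   # 3257290 in this run
    # waitforlisten then polls until the app answers on /var/tmp/spdk.sock before any RPCs are sent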
00:11:07.597 [2024-12-13 09:22:19.776094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.597 [2024-12-13 09:22:19.843951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.597 [2024-12-13 09:22:19.889149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.597 [2024-12-13 09:22:19.889186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.597 [2024-12-13 09:22:19.889194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.597 [2024-12-13 09:22:19.889200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.597 [2024-12-13 09:22:19.889205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.597 [2024-12-13 09:22:19.890510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.597 [2024-12-13 09:22:19.890629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.597 [2024-12-13 09:22:19.890729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.597 [2024-12-13 09:22:19.890730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.855 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.855 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:07.855 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.855 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.855 09:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:07.855 "tick_rate": 2100000000, 00:11:07.855 "poll_groups": [ 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_000", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_001", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_002", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 
"current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_003", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [] 00:11:07.855 } 00:11:07.855 ] 00:11:07.855 }' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 [2024-12-13 09:22:20.141370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:07.855 "tick_rate": 2100000000, 00:11:07.855 "poll_groups": [ 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_000", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [ 00:11:07.855 { 00:11:07.855 "trtype": "TCP" 00:11:07.855 } 00:11:07.855 ] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_001", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [ 00:11:07.855 { 00:11:07.855 "trtype": "TCP" 00:11:07.855 } 00:11:07.855 ] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_002", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [ 00:11:07.855 { 00:11:07.855 "trtype": "TCP" 
00:11:07.855 } 00:11:07.855 ] 00:11:07.855 }, 00:11:07.855 { 00:11:07.855 "name": "nvmf_tgt_poll_group_003", 00:11:07.855 "admin_qpairs": 0, 00:11:07.855 "io_qpairs": 0, 00:11:07.855 "current_admin_qpairs": 0, 00:11:07.855 "current_io_qpairs": 0, 00:11:07.855 "pending_bdev_io": 0, 00:11:07.855 "completed_nvme_io": 0, 00:11:07.855 "transports": [ 00:11:07.855 { 00:11:07.855 "trtype": "TCP" 00:11:07.855 } 00:11:07.855 ] 00:11:07.855 } 00:11:07.855 ] 00:11:07.855 }' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.855 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 Malloc1 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
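The jcount/jsum checks that follow the nvmf_get_stats dumps above are thin helpers from target/rpc.sh; in essence they run jq pipelines like the sketch below (assuming the JSON is held in $stats, as in the script). After rpc_cmd nvmf_create_transport -t tcp -o -u 8192, the same dump gains a "trtype": "TCP" entry in every poll group's transports array, which is what the second round of checks verifies.

  echo "$stats" | jq '.poll_groups[].name' | wc -l                                  # jcount: 4 poll groups
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # jsum: 0
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'    # jsum: 0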
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 [2024-12-13 09:22:20.321536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:08.114 [2024-12-13 09:22:20.356267] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:08.114 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:08.114 could not add new controller: failed to write to nvme-fabrics device 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:08.114 09:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.114 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.487 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.487 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:09.487 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.487 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:09.487 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:11.386 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.386 [2024-12-13 09:22:23.729550] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:11:11.644 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:11.644 could not add new controller: failed to write to nvme-fabrics device 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.644 
09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.644 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.577 09:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.577 09:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.577 09:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.577 09:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:12.577 09:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:15.214 09:22:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.214 
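The host-access sequence that finishes here (target/rpc.sh@52-@78) can be read as the condensed steps below; <hostnqn> stands for the uuid-based NQN nqn.2014-08.org.nvmexpress:uuid:80b56b8f-... shown in full in the trace, and the nvme connect arguments are abbreviated.

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # lock the subsystem down
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect ... -n nqn.2016-06.io.spdk:cnode1                        # rejected: "does not allow host"
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 <hostnqn>
  nvme connect ... && nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # now accepted
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 <hostnqn>
  nvme connect ...                                                      # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect ... && nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # accepted without a host entry
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1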
09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.214 [2024-12-13 09:22:27.137931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.214 09:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.180 09:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.180 09:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.180 09:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.180 09:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.180 09:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.079 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.080 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 [2024-12-13 09:22:30.491883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.338 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.272 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.272 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:19.272 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.272 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:19.272 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 [2024-12-13 09:22:33.865093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.799 09:22:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.733 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.733 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:22.733 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.733 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:22.733 09:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:24.629 
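From target/rpc.sh@81 onward the script runs a five-pass loop; each pass recorded above amounts to roughly the following (condensed from the trace, nvme connect arguments as in the sketch above):

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed NSID 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect ... -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME              # poll lsblk until the namespace shows up
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done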
09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:24.629 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:24.629 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 [2024-12-13 09:22:37.127048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.887 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.888 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.888 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.888 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.260 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.260 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:26.260 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.260 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:26.260 09:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 [2024-12-13 09:22:40.473719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.159 09:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.533 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.533 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.533 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.533 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:29.533 09:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.432 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:31.691 
09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 [2024-12-13 09:22:43.830508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.691 [2024-12-13 09:22:43.878601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.691 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 
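The loop that starts at target/rpc.sh@99 is a second five-pass sequence that never involves the initiator; it only cycles the subsystem and namespace RPCs, roughly as follows (condensed from the trace):

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n given; the trace removes NSID 1 below
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done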
09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 [2024-12-13 09:22:43.926713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 [2024-12-13 09:22:43.974904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 [2024-12-13 09:22:44.023073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:31.951 "tick_rate": 2100000000, 00:11:31.951 "poll_groups": [ 00:11:31.951 { 00:11:31.951 "name": "nvmf_tgt_poll_group_000", 00:11:31.951 "admin_qpairs": 2, 00:11:31.951 "io_qpairs": 168, 00:11:31.951 "current_admin_qpairs": 0, 00:11:31.951 "current_io_qpairs": 0, 00:11:31.951 "pending_bdev_io": 0, 00:11:31.951 "completed_nvme_io": 267, 00:11:31.951 "transports": [ 00:11:31.951 { 00:11:31.951 "trtype": "TCP" 00:11:31.951 } 00:11:31.951 ] 00:11:31.951 }, 00:11:31.951 { 00:11:31.951 "name": "nvmf_tgt_poll_group_001", 00:11:31.951 "admin_qpairs": 2, 00:11:31.951 "io_qpairs": 168, 00:11:31.951 "current_admin_qpairs": 0, 00:11:31.951 "current_io_qpairs": 0, 00:11:31.951 "pending_bdev_io": 0, 00:11:31.951 "completed_nvme_io": 218, 00:11:31.951 "transports": [ 00:11:31.951 { 00:11:31.951 "trtype": "TCP" 00:11:31.951 } 00:11:31.951 ] 00:11:31.951 }, 00:11:31.951 { 00:11:31.951 "name": "nvmf_tgt_poll_group_002", 00:11:31.951 "admin_qpairs": 1, 00:11:31.951 "io_qpairs": 168, 00:11:31.951 "current_admin_qpairs": 0, 00:11:31.951 "current_io_qpairs": 0, 00:11:31.951 "pending_bdev_io": 0, 00:11:31.951 "completed_nvme_io": 316, 00:11:31.951 "transports": [ 00:11:31.951 { 00:11:31.951 "trtype": "TCP" 00:11:31.951 } 00:11:31.951 ] 00:11:31.951 }, 00:11:31.951 { 00:11:31.951 "name": "nvmf_tgt_poll_group_003", 00:11:31.951 "admin_qpairs": 2, 00:11:31.951 "io_qpairs": 168, 00:11:31.951 "current_admin_qpairs": 0, 00:11:31.951 "current_io_qpairs": 0, 00:11:31.951 "pending_bdev_io": 0, 00:11:31.951 "completed_nvme_io": 221, 00:11:31.951 "transports": [ 00:11:31.951 { 00:11:31.951 "trtype": "TCP" 00:11:31.951 } 00:11:31.951 ] 00:11:31.951 } 00:11:31.951 ] 00:11:31.951 }' 00:11:31.951 09:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.951 rmmod nvme_tcp 00:11:31.951 rmmod nvme_fabrics 00:11:31.951 rmmod nvme_keyring 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3257290 ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3257290 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3257290 ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3257290 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3257290 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3257290' 00:11:31.951 killing process with pid 3257290 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3257290 00:11:31.951 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3257290 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.210 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.741 00:11:34.741 real 0m32.257s 00:11:34.741 user 1m39.346s 00:11:34.741 sys 0m5.991s 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.741 ************************************ 00:11:34.741 END TEST nvmf_rpc 00:11:34.741 ************************************ 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.741 ************************************ 00:11:34.741 START TEST nvmf_invalid 00:11:34.741 ************************************ 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:34.741 * Looking for test storage... 
00:11:34.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.741 --rc genhtml_branch_coverage=1 00:11:34.741 --rc genhtml_function_coverage=1 00:11:34.741 --rc genhtml_legend=1 00:11:34.741 --rc geninfo_all_blocks=1 00:11:34.741 --rc geninfo_unexecuted_blocks=1 00:11:34.741 00:11:34.741 ' 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:34.741 09:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.741 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.742 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.000 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.001 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.001 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.001 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:11:40.001 00:11:40.001 --- 10.0.0.2 ping statistics --- 00:11:40.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.001 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:11:40.001 00:11:40.001 --- 10.0.0.1 ping statistics --- 00:11:40.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.001 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3264946 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3264946 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3264946 ']' 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.001 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 [2024-12-13 09:22:52.258310] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:11:40.001 [2024-12-13 09:22:52.258353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.001 [2024-12-13 09:22:52.324475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.001 [2024-12-13 09:22:52.366497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.001 [2024-12-13 09:22:52.366532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.001 [2024-12-13 09:22:52.366539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.001 [2024-12-13 09:22:52.366545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.002 [2024-12-13 09:22:52.366551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.258 [2024-12-13 09:22:52.368017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.258 [2024-12-13 09:22:52.368036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.258 [2024-12-13 09:22:52.368059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.258 [2024-12-13 09:22:52.368061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:40.258 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23330 00:11:40.515 [2024-12-13 09:22:52.670752] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:40.515 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:40.515 { 00:11:40.515 "nqn": "nqn.2016-06.io.spdk:cnode23330", 00:11:40.515 "tgt_name": "foobar", 00:11:40.515 "method": "nvmf_create_subsystem", 00:11:40.515 "req_id": 1 00:11:40.515 } 00:11:40.515 Got JSON-RPC error response 00:11:40.515 response: 00:11:40.515 { 00:11:40.515 "code": -32603, 00:11:40.515 "message": "Unable to find target foobar" 00:11:40.515 }' 00:11:40.515 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:40.515 { 00:11:40.515 "nqn": "nqn.2016-06.io.spdk:cnode23330", 00:11:40.515 "tgt_name": "foobar", 00:11:40.515 "method": "nvmf_create_subsystem", 00:11:40.515 "req_id": 1 00:11:40.515 } 00:11:40.515 Got JSON-RPC error response 00:11:40.515 
response: 00:11:40.515 { 00:11:40.515 "code": -32603, 00:11:40.515 "message": "Unable to find target foobar" 00:11:40.515 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:40.515 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:40.515 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9205 00:11:40.772 [2024-12-13 09:22:52.891521] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9205: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:40.772 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:40.772 { 00:11:40.772 "nqn": "nqn.2016-06.io.spdk:cnode9205", 00:11:40.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:40.772 "method": "nvmf_create_subsystem", 00:11:40.772 "req_id": 1 00:11:40.772 } 00:11:40.772 Got JSON-RPC error response 00:11:40.772 response: 00:11:40.772 { 00:11:40.772 "code": -32602, 00:11:40.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:40.772 }' 00:11:40.772 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:40.772 { 00:11:40.772 "nqn": "nqn.2016-06.io.spdk:cnode9205", 00:11:40.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:40.772 "method": "nvmf_create_subsystem", 00:11:40.772 "req_id": 1 00:11:40.772 } 00:11:40.772 Got JSON-RPC error response 00:11:40.772 response: 00:11:40.772 { 00:11:40.772 "code": -32602, 00:11:40.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:40.772 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:40.772 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:40.772 09:22:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15034 00:11:40.772 [2024-12-13 09:22:53.104194] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15034: invalid model number 'SPDK_Controller' 00:11:40.772 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:40.772 { 00:11:40.772 "nqn": "nqn.2016-06.io.spdk:cnode15034", 00:11:40.772 "model_number": "SPDK_Controller\u001f", 00:11:40.772 "method": "nvmf_create_subsystem", 00:11:40.772 "req_id": 1 00:11:40.772 } 00:11:40.772 Got JSON-RPC error response 00:11:40.772 response: 00:11:40.772 { 00:11:40.772 "code": -32602, 00:11:40.772 "message": "Invalid MN SPDK_Controller\u001f" 00:11:40.772 }' 00:11:40.772 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:40.772 { 00:11:40.772 "nqn": "nqn.2016-06.io.spdk:cnode15034", 00:11:40.772 "model_number": "SPDK_Controller\u001f", 00:11:40.772 "method": "nvmf_create_subsystem", 00:11:40.772 "req_id": 1 00:11:40.772 } 00:11:40.772 Got JSON-RPC error response 00:11:40.772 response: 00:11:40.772 { 00:11:40.772 "code": -32602, 00:11:40.772 "message": "Invalid MN SPDK_Controller\u001f" 00:11:40.772 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:40.772 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:41.029 09:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:41.029 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:41.030 
09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
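The xtrace above shows invalid.sh's gen_random_s helper assembling a test string one character at a time from ASCII codes 32-127 (printf %x to get the hex code, echo -e '\xNN' to emit the character, string+= to append it). A minimal sketch of an equivalent generator, assuming plain bash and not the exact invalid.sh implementation:

    # Hedged sketch: build a random string of $1 characters drawn from ASCII codes
    # 32..127, the same code-point range the trace above iterates over.
    gen_random_string() {
        local length=$1 string="" ch code i
        for ((i = 0; i < length; i++)); do
            code=$((RANDOM % 96 + 32))                     # pick a code point in 32..127
            printf -v ch '%b' "$(printf '\\x%x' "$code")"  # convert the code to its character
            string+=$ch
        done
        echo "$string"
    }
    # Example: gen_random_string 21 yields a 21-character value like the serial
    # number echoed further down in this trace.
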
00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'e8sXdzU\B63L^OJ.QTqW' 00:11:41.030 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'e8sXdzU\B63L^OJ.QTqW' nqn.2016-06.io.spdk:cnode18027 00:11:41.288 [2024-12-13 09:22:53.453361] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18027: invalid serial number 'e8sXdzU\B63L^OJ.QTqW' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:41.288 { 00:11:41.288 "nqn": "nqn.2016-06.io.spdk:cnode18027", 00:11:41.288 "serial_number": "e\u007f8sXdzU\\B63L^OJ.QTqW", 00:11:41.288 "method": "nvmf_create_subsystem", 00:11:41.288 "req_id": 1 00:11:41.288 } 00:11:41.288 Got JSON-RPC error response 00:11:41.288 response: 00:11:41.288 { 00:11:41.288 "code": -32602, 00:11:41.288 "message": "Invalid SN e\u007f8sXdzU\\B63L^OJ.QTqW" 00:11:41.288 }' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 
00:11:41.288 { 00:11:41.288 "nqn": "nqn.2016-06.io.spdk:cnode18027", 00:11:41.288 "serial_number": "e\u007f8sXdzU\\B63L^OJ.QTqW", 00:11:41.288 "method": "nvmf_create_subsystem", 00:11:41.288 "req_id": 1 00:11:41.288 } 00:11:41.288 Got JSON-RPC error response 00:11:41.288 response: 00:11:41.288 { 00:11:41.288 "code": -32602, 00:11:41.288 "message": "Invalid SN e\u007f8sXdzU\\B63L^OJ.QTqW" 00:11:41.288 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
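The backslash-escaped globs in the trace (for example *\I\n\v\a\l\i\d\ \S\N*) are just xtrace's rendering of a quoted [[ $out == *"Invalid SN"* ]] pattern match. A minimal sketch of that negative-test pattern, where rpc.py stands for the scripts/rpc.py path used throughout this log:

    # Hedged sketch of the check performed after each bad nvmf_create_subsystem call:
    # run the RPC with a deliberately invalid value, capture the JSON-RPC error text,
    # and require the expected message substring, otherwise fail.
    out=$(rpc.py nvmf_create_subsystem -s $'BADSERIAL\037' nqn.2016-06.io.spdk:cnode1 2>&1) || true
    if [[ $out == *"Invalid SN"* ]]; then
        echo "got the expected 'Invalid SN' error"
    else
        echo "unexpected RPC response: $out" >&2
        exit 1
    fi
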
00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7d' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 81 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:41.288 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:41.289 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.545 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+='~' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '!V6XcL}}[-6,WvQ t&bXl`8U2O^8Sc)2C~ )H&_M' 00:11:41.546 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '!V6XcL}}[-6,WvQ t&bXl`8U2O^8Sc)2C~ )H&_M' nqn.2016-06.io.spdk:cnode16442 00:11:41.802 [2024-12-13 09:22:53.922904] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16442: invalid model number '!V6XcL}}[-6,WvQ t&bXl`8U2O^8Sc)2C~ )H&_M' 00:11:41.802 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:41.802 { 00:11:41.802 "nqn": "nqn.2016-06.io.spdk:cnode16442", 00:11:41.802 "model_number": "!V6XcL}}[-6,WvQ t&bXl`8U2O\u007f^8Sc)2C~ )H&_M", 00:11:41.802 "method": "nvmf_create_subsystem", 00:11:41.802 "req_id": 1 00:11:41.802 } 00:11:41.802 Got JSON-RPC error response 00:11:41.802 response: 00:11:41.802 { 00:11:41.802 "code": -32602, 00:11:41.802 "message": "Invalid MN !V6XcL}}[-6,WvQ t&bXl`8U2O\u007f^8Sc)2C~ )H&_M" 00:11:41.802 }' 00:11:41.802 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:41.802 { 00:11:41.802 "nqn": "nqn.2016-06.io.spdk:cnode16442", 00:11:41.802 "model_number": "!V6XcL}}[-6,WvQ t&bXl`8U2O\u007f^8Sc)2C~ )H&_M", 00:11:41.802 "method": "nvmf_create_subsystem", 00:11:41.802 "req_id": 1 00:11:41.802 } 00:11:41.802 Got JSON-RPC error response 00:11:41.802 response: 00:11:41.802 { 00:11:41.802 "code": -32602, 00:11:41.802 "message": "Invalid MN !V6XcL}}[-6,WvQ t&bXl`8U2O\u007f^8Sc)2C~ )H&_M" 00:11:41.802 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:41.802 09:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:41.802 [2024-12-13 09:22:54.127631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.802 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:42.059 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:42.059 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:42.059 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:42.059 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:42.059 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:42.316 [2024-12-13 09:22:54.553000] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:42.316 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:42.316 { 00:11:42.316 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:42.316 "listen_address": { 00:11:42.316 "trtype": "tcp", 00:11:42.316 "traddr": "", 00:11:42.316 "trsvcid": "4421" 00:11:42.316 }, 00:11:42.316 "method": "nvmf_subsystem_remove_listener", 00:11:42.316 "req_id": 1 00:11:42.316 } 00:11:42.316 Got JSON-RPC error response 00:11:42.316 response: 00:11:42.316 { 00:11:42.316 "code": -32602, 00:11:42.316 "message": "Invalid 
parameters" 00:11:42.316 }' 00:11:42.316 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:42.316 { 00:11:42.316 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:42.316 "listen_address": { 00:11:42.316 "trtype": "tcp", 00:11:42.316 "traddr": "", 00:11:42.316 "trsvcid": "4421" 00:11:42.316 }, 00:11:42.316 "method": "nvmf_subsystem_remove_listener", 00:11:42.316 "req_id": 1 00:11:42.316 } 00:11:42.316 Got JSON-RPC error response 00:11:42.316 response: 00:11:42.316 { 00:11:42.316 "code": -32602, 00:11:42.316 "message": "Invalid parameters" 00:11:42.316 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:42.316 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3909 -i 0 00:11:42.572 [2024-12-13 09:22:54.749600] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3909: invalid cntlid range [0-65519] 00:11:42.572 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:42.572 { 00:11:42.572 "nqn": "nqn.2016-06.io.spdk:cnode3909", 00:11:42.572 "min_cntlid": 0, 00:11:42.572 "method": "nvmf_create_subsystem", 00:11:42.572 "req_id": 1 00:11:42.572 } 00:11:42.572 Got JSON-RPC error response 00:11:42.572 response: 00:11:42.572 { 00:11:42.572 "code": -32602, 00:11:42.572 "message": "Invalid cntlid range [0-65519]" 00:11:42.572 }' 00:11:42.572 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:42.572 { 00:11:42.572 "nqn": "nqn.2016-06.io.spdk:cnode3909", 00:11:42.572 "min_cntlid": 0, 00:11:42.572 "method": "nvmf_create_subsystem", 00:11:42.572 "req_id": 1 00:11:42.572 } 00:11:42.572 Got JSON-RPC error response 00:11:42.572 response: 00:11:42.572 { 00:11:42.572 "code": -32602, 00:11:42.572 "message": "Invalid cntlid range [0-65519]" 00:11:42.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.572 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17780 -i 65520 00:11:42.829 [2024-12-13 09:22:54.950275] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17780: invalid cntlid range [65520-65519] 00:11:42.829 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:42.829 { 00:11:42.829 "nqn": "nqn.2016-06.io.spdk:cnode17780", 00:11:42.829 "min_cntlid": 65520, 00:11:42.829 "method": "nvmf_create_subsystem", 00:11:42.829 "req_id": 1 00:11:42.829 } 00:11:42.829 Got JSON-RPC error response 00:11:42.829 response: 00:11:42.829 { 00:11:42.829 "code": -32602, 00:11:42.829 "message": "Invalid cntlid range [65520-65519]" 00:11:42.829 }' 00:11:42.829 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:42.829 { 00:11:42.829 "nqn": "nqn.2016-06.io.spdk:cnode17780", 00:11:42.829 "min_cntlid": 65520, 00:11:42.829 "method": "nvmf_create_subsystem", 00:11:42.829 "req_id": 1 00:11:42.829 } 00:11:42.829 Got JSON-RPC error response 00:11:42.829 response: 00:11:42.829 { 00:11:42.829 "code": -32602, 00:11:42.829 "message": "Invalid cntlid range [65520-65519]" 00:11:42.829 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.829 09:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28848 -I 0 00:11:42.829 [2024-12-13 09:22:55.146968] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28848: invalid cntlid range [1-0] 00:11:42.829 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:42.829 { 00:11:42.829 "nqn": "nqn.2016-06.io.spdk:cnode28848", 00:11:42.829 "max_cntlid": 0, 00:11:42.829 "method": "nvmf_create_subsystem", 00:11:42.829 "req_id": 1 00:11:42.829 } 00:11:42.829 Got JSON-RPC error response 00:11:42.829 response: 00:11:42.829 { 00:11:42.829 "code": -32602, 00:11:42.829 "message": "Invalid cntlid range [1-0]" 00:11:42.829 }' 00:11:42.829 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:42.829 { 00:11:42.829 "nqn": "nqn.2016-06.io.spdk:cnode28848", 00:11:42.829 "max_cntlid": 0, 00:11:42.829 "method": "nvmf_create_subsystem", 00:11:42.829 "req_id": 1 00:11:42.829 } 00:11:42.829 Got JSON-RPC error response 00:11:42.829 response: 00:11:42.829 { 00:11:42.829 "code": -32602, 00:11:42.829 "message": "Invalid cntlid range [1-0]" 00:11:42.829 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.829 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15706 -I 65520 00:11:43.085 [2024-12-13 09:22:55.343635] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15706: invalid cntlid range [1-65520] 00:11:43.085 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:43.085 { 00:11:43.085 "nqn": "nqn.2016-06.io.spdk:cnode15706", 00:11:43.085 "max_cntlid": 65520, 00:11:43.085 "method": "nvmf_create_subsystem", 00:11:43.085 "req_id": 1 00:11:43.085 } 00:11:43.085 Got JSON-RPC error response 00:11:43.085 response: 00:11:43.085 { 00:11:43.085 "code": -32602, 00:11:43.085 "message": "Invalid cntlid range [1-65520]" 00:11:43.085 }' 00:11:43.085 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:43.085 { 00:11:43.085 "nqn": "nqn.2016-06.io.spdk:cnode15706", 00:11:43.085 "max_cntlid": 65520, 00:11:43.085 "method": "nvmf_create_subsystem", 00:11:43.085 "req_id": 1 00:11:43.085 } 00:11:43.085 Got JSON-RPC error response 00:11:43.085 response: 00:11:43.085 { 00:11:43.085 "code": -32602, 00:11:43.085 "message": "Invalid cntlid range [1-65520]" 00:11:43.085 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.085 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23674 -i 6 -I 5 00:11:43.342 [2024-12-13 09:22:55.544342] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23674: invalid cntlid range [6-5] 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:43.342 { 00:11:43.342 "nqn": "nqn.2016-06.io.spdk:cnode23674", 00:11:43.342 "min_cntlid": 6, 00:11:43.342 "max_cntlid": 5, 00:11:43.342 "method": "nvmf_create_subsystem", 00:11:43.342 "req_id": 1 00:11:43.342 } 00:11:43.342 Got JSON-RPC error response 00:11:43.342 response: 00:11:43.342 { 00:11:43.342 "code": -32602, 00:11:43.342 "message": "Invalid cntlid range [6-5]" 00:11:43.342 }' 00:11:43.342 
09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:43.342 { 00:11:43.342 "nqn": "nqn.2016-06.io.spdk:cnode23674", 00:11:43.342 "min_cntlid": 6, 00:11:43.342 "max_cntlid": 5, 00:11:43.342 "method": "nvmf_create_subsystem", 00:11:43.342 "req_id": 1 00:11:43.342 } 00:11:43.342 Got JSON-RPC error response 00:11:43.342 response: 00:11:43.342 { 00:11:43.342 "code": -32602, 00:11:43.342 "message": "Invalid cntlid range [6-5]" 00:11:43.342 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:43.342 { 00:11:43.342 "name": "foobar", 00:11:43.342 "method": "nvmf_delete_target", 00:11:43.342 "req_id": 1 00:11:43.342 } 00:11:43.342 Got JSON-RPC error response 00:11:43.342 response: 00:11:43.342 { 00:11:43.342 "code": -32602, 00:11:43.342 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:43.342 }' 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:43.342 { 00:11:43.342 "name": "foobar", 00:11:43.342 "method": "nvmf_delete_target", 00:11:43.342 "req_id": 1 00:11:43.342 } 00:11:43.342 Got JSON-RPC error response 00:11:43.342 response: 00:11:43.342 { 00:11:43.342 "code": -32602, 00:11:43.342 "message": "The specified target doesn't exist, cannot delete it." 00:11:43.342 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.342 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.342 rmmod nvme_tcp 00:11:43.599 rmmod nvme_fabrics 00:11:43.599 rmmod nvme_keyring 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3264946 ']' 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3264946 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3264946 ']' 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3264946 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:43.599 
09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3264946 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3264946' 00:11:43.599 killing process with pid 3264946 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3264946 00:11:43.599 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3264946 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.857 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.756 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.756 00:11:45.756 real 0m11.457s 00:11:45.757 user 0m18.535s 00:11:45.757 sys 0m4.964s 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:45.757 ************************************ 00:11:45.757 END TEST nvmf_invalid 00:11:45.757 ************************************ 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.757 09:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.757 ************************************ 00:11:45.757 START TEST nvmf_connect_stress 00:11:45.757 ************************************ 00:11:45.757 09:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:46.014 * Looking for test storage... 00:11:46.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.014 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.014 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.014 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:46.014 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:46.014 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.015 --rc genhtml_branch_coverage=1 00:11:46.015 --rc genhtml_function_coverage=1 00:11:46.015 --rc genhtml_legend=1 00:11:46.015 --rc geninfo_all_blocks=1 00:11:46.015 --rc geninfo_unexecuted_blocks=1 00:11:46.015 00:11:46.015 ' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.015 --rc genhtml_branch_coverage=1 00:11:46.015 --rc genhtml_function_coverage=1 00:11:46.015 --rc genhtml_legend=1 00:11:46.015 --rc geninfo_all_blocks=1 00:11:46.015 --rc geninfo_unexecuted_blocks=1 00:11:46.015 00:11:46.015 ' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.015 --rc genhtml_branch_coverage=1 00:11:46.015 --rc genhtml_function_coverage=1 00:11:46.015 --rc genhtml_legend=1 00:11:46.015 --rc geninfo_all_blocks=1 00:11:46.015 --rc geninfo_unexecuted_blocks=1 00:11:46.015 00:11:46.015 ' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:46.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.015 --rc genhtml_branch_coverage=1 00:11:46.015 --rc genhtml_function_coverage=1 00:11:46.015 --rc genhtml_legend=1 00:11:46.015 --rc geninfo_all_blocks=1 00:11:46.015 --rc geninfo_unexecuted_blocks=1 00:11:46.015 00:11:46.015 ' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:46.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.015 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.016 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.573 09:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.573 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:52.574 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:52.574 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:52.574 Found net devices under 0000:af:00.0: cvl_0_0 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:52.574 Found net devices under 0000:af:00.1: cvl_0_1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.574 09:23:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:11:52.574 00:11:52.574 --- 10.0.0.2 ping statistics --- 00:11:52.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.574 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:11:52.574 00:11:52.574 --- 10.0.0.1 ping statistics --- 00:11:52.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.574 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3269250 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3269250 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3269250 ']' 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:52.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.574 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.574 [2024-12-13 09:23:04.134441] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:11:52.574 [2024-12-13 09:23:04.134496] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.574 [2024-12-13 09:23:04.204947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.575 [2024-12-13 09:23:04.249155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.575 [2024-12-13 09:23:04.249189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.575 [2024-12-13 09:23:04.249197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.575 [2024-12-13 09:23:04.249203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.575 [2024-12-13 09:23:04.249208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.575 [2024-12-13 09:23:04.250454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.575 [2024-12-13 09:23:04.250534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.575 [2024-12-13 09:23:04.250536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 [2024-12-13 09:23:04.399009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
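At this point nvmftestinit has finished the physical-NIC plumbing: the two E810 ports were detected as cvl_0_0 and cvl_0_1, cvl_0_0 was moved into a dedicated network namespace to play the target, both sides were addressed, and an iptables ACCEPT rule plus the two pings confirmed 10.0.0.1 <-> 10.0.0.2 reachability. (The earlier "[: : integer expression expected" message comes from nvmf/common.sh line 33 comparing an empty variable with -eq; the run continues past it.) Condensed from the traced commands, with the SPDK_NVMF comment tag on the iptables rule omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # plus the reverse ping from inside the namespace

The connect_stress target is then configured over JSON-RPC. rpc_cmd is the harness's wrapper around the SPDK RPC client talking to the app's RPC socket (/var/tmp/spdk.sock here); a rough standalone equivalent of the sequence traced here and in the next few entries, with the absolute workspace paths shortened to relative ones, is:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # core mask 0xE = the 3 reactors seen above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512              # NULL1: 1000 MiB null bdev, 512-byte blocks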
00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 [2024-12-13 09:23:04.419232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 NULL1 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3269278 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:52.575 09:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 09:23:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.833 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.833 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:52.833 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.833 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.833 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.396 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:53.396 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.396 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.396 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.653 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:53.653 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.653 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.653 09:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.910 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.911 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:53.911 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.911 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.911 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.168 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.168 09:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:54.168 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.168 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.168 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.732 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.732 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:54.732 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.732 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.732 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.990 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.990 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:54.990 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.990 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.990 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.246 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.246 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:55.246 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.246 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.246 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.503 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.503 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:55.503 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.503 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.503 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.761 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.761 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:55.761 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.761 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.761 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.325 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.325 09:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:56.325 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.325 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.325 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.582 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.582 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:56.582 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.582 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.582 09:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.839 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.839 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:56.839 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.839 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.839 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.096 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.096 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:57.096 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.096 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.096 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.660 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.660 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:57.660 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.660 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.660 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.917 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.917 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:57.917 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.917 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.917 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.174 09:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:58.174 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.174 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.174 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.431 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.431 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:58.431 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.431 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.431 09:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.688 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.688 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:58.688 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.688 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.688 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.252 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:59.252 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.252 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.252 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.509 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.509 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:59.509 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.509 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.509 09:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.767 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.767 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:11:59.767 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.767 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.767 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.025 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.025 09:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:00.025 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.025 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.025 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.588 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.588 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:00.588 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.588 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.588 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.863 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:00.863 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.863 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.863 09:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.120 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.120 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:01.120 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.120 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.120 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.377 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.377 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:01.377 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.377 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.377 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.635 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.635 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:01.635 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.635 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.635 09:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.200 09:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:02.200 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.200 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.200 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:02.458 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.458 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3269278 00:12:02.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3269278) - No such process 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3269278 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.459 rmmod nvme_tcp 00:12:02.459 rmmod nvme_fabrics 00:12:02.459 rmmod nvme_keyring 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3269250 ']' 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3269250 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3269250 ']' 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3269250 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3269250 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3269250' 00:12:02.459 killing process with pid 3269250 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3269250 00:12:02.459 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3269250 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.717 09:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.619 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.878 00:12:04.878 real 0m18.867s 00:12:04.878 user 0m39.240s 00:12:04.878 sys 0m8.521s 00:12:04.878 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.878 09:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.878 ************************************ 00:12:04.878 END TEST nvmf_connect_stress 00:12:04.878 ************************************ 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.878 ************************************ 00:12:04.878 START TEST nvmf_fused_ordering 00:12:04.878 ************************************ 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:04.878 * Looking for test storage... 
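nvmf_fused_ordering now starts and repeats the same preamble that opened nvmf_connect_stress: locate the test storage and run the lcov coverage gate from autotest_common.sh before sourcing test/nvmf/common.sh. The gate reads the installed lcov version (1.15 here), compares it against 2 with the lt/cmp_versions helpers from scripts/common.sh, and because 1.15 < 2 exports the 1.x-style --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options in LCOV_OPTS and LCOV. A simplified reconstruction of the comparison helper as traced (the real helper additionally validates every component with a decimal check):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v max a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> true, so the pre-2.0 lcov flags are used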
00:12:04.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.878 --rc genhtml_branch_coverage=1 00:12:04.878 --rc genhtml_function_coverage=1 00:12:04.878 --rc genhtml_legend=1 00:12:04.878 --rc geninfo_all_blocks=1 00:12:04.878 --rc geninfo_unexecuted_blocks=1 00:12:04.878 00:12:04.878 ' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.878 --rc genhtml_branch_coverage=1 00:12:04.878 --rc genhtml_function_coverage=1 00:12:04.878 --rc genhtml_legend=1 00:12:04.878 --rc geninfo_all_blocks=1 00:12:04.878 --rc geninfo_unexecuted_blocks=1 00:12:04.878 00:12:04.878 ' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.878 --rc genhtml_branch_coverage=1 00:12:04.878 --rc genhtml_function_coverage=1 00:12:04.878 --rc genhtml_legend=1 00:12:04.878 --rc geninfo_all_blocks=1 00:12:04.878 --rc geninfo_unexecuted_blocks=1 00:12:04.878 00:12:04.878 ' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.878 --rc genhtml_branch_coverage=1 00:12:04.878 --rc genhtml_function_coverage=1 00:12:04.878 --rc genhtml_legend=1 00:12:04.878 --rc geninfo_all_blocks=1 00:12:04.878 --rc geninfo_unexecuted_blocks=1 00:12:04.878 00:12:04.878 ' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.878 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:04.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.879 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.137 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.399 09:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:10.399 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:10.399 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:10.399 Found net devices under 0000:af:00.0: cvl_0_0 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:10.399 Found net devices under 0000:af:00.1: cvl_0_1 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.399 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:12:10.400 00:12:10.400 --- 10.0.0.2 ping statistics --- 00:12:10.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.400 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:12:10.400 00:12:10.400 --- 10.0.0.1 ping statistics --- 00:12:10.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.400 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.400 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3274331 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3274331 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3274331 ']' 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.658 09:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.658 [2024-12-13 09:23:22.832356] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:10.658 [2024-12-13 09:23:22.832402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.658 [2024-12-13 09:23:22.898313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.658 [2024-12-13 09:23:22.937124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.658 [2024-12-13 09:23:22.937160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.658 [2024-12-13 09:23:22.937167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.658 [2024-12-13 09:23:22.937173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.658 [2024-12-13 09:23:22.937177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.658 [2024-12-13 09:23:22.937685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 [2024-12-13 09:23:23.068078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 [2024-12-13 09:23:23.088250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 NULL1 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.916 09:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:10.916 [2024-12-13 09:23:23.145657] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
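For readers skimming the log: the stretch between the nvmf_tgt launch and this point reduces to a short RPC sequence that builds the test target, followed by the fused-ordering initiator run whose output fills the rest of this test. A minimal sketch of that flow is shown below; it assumes a local SPDK checkout at $SPDK_DIR and uses scripts/rpc.py as a stand-in for the test framework's rpc_cmd helper talking to the same /var/tmp/spdk.sock socket. The RPC names and arguments are copied from the log above; the surrounding shell variables are illustrative only.

# Sketch of the setup captured in this log (not the fused_ordering.sh script itself).
# Assumes nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace and
# listening on /var/tmp/spdk.sock, as shown earlier in the log.
SPDK_DIR=/path/to/spdk            # assumption: local SPDK checkout
RPC="$SPDK_DIR/scripts/rpc.py"    # assumption: stand-in for the rpc_cmd wrapper

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as logged
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                           # allow any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                               # listen on the namespaced target IP/port
$RPC bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512-byte blocks (the "1GB" namespace reported below)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # export NULL1 as namespace 1

# Initiator side: run the fused-ordering exerciser against the exported namespace;
# the fused_ordering(N) counters that follow are its progress output.
"$SPDK_DIR"/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'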
00:12:10.916 [2024-12-13 09:23:23.145687] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274539 ] 00:12:11.174 Attached to nqn.2016-06.io.spdk:cnode1 00:12:11.174 Namespace ID: 1 size: 1GB 00:12:11.174 fused_ordering(0) 00:12:11.174 fused_ordering(1) 00:12:11.174 fused_ordering(2) 00:12:11.174 fused_ordering(3) 00:12:11.174 fused_ordering(4) 00:12:11.174 fused_ordering(5) 00:12:11.174 fused_ordering(6) 00:12:11.174 fused_ordering(7) 00:12:11.174 fused_ordering(8) 00:12:11.174 fused_ordering(9) 00:12:11.174 fused_ordering(10) 00:12:11.174 fused_ordering(11) 00:12:11.174 fused_ordering(12) 00:12:11.174 fused_ordering(13) 00:12:11.174 fused_ordering(14) 00:12:11.174 fused_ordering(15) 00:12:11.174 fused_ordering(16) 00:12:11.174 fused_ordering(17) 00:12:11.174 fused_ordering(18) 00:12:11.174 fused_ordering(19) 00:12:11.174 fused_ordering(20) 00:12:11.174 fused_ordering(21) 00:12:11.174 fused_ordering(22) 00:12:11.174 fused_ordering(23) 00:12:11.174 fused_ordering(24) 00:12:11.174 fused_ordering(25) 00:12:11.174 fused_ordering(26) 00:12:11.174 fused_ordering(27) 00:12:11.174 fused_ordering(28) 00:12:11.174 fused_ordering(29) 00:12:11.174 fused_ordering(30) 00:12:11.174 fused_ordering(31) 00:12:11.174 fused_ordering(32) 00:12:11.174 fused_ordering(33) 00:12:11.174 fused_ordering(34) 00:12:11.174 fused_ordering(35) 00:12:11.174 fused_ordering(36) 00:12:11.174 fused_ordering(37) 00:12:11.174 fused_ordering(38) 00:12:11.174 fused_ordering(39) 00:12:11.174 fused_ordering(40) 00:12:11.174 fused_ordering(41) 00:12:11.174 fused_ordering(42) 00:12:11.174 fused_ordering(43) 00:12:11.174 fused_ordering(44) 00:12:11.174 fused_ordering(45) 00:12:11.174 fused_ordering(46) 00:12:11.174 fused_ordering(47) 00:12:11.174 fused_ordering(48) 00:12:11.174 fused_ordering(49) 00:12:11.174 fused_ordering(50) 00:12:11.174 fused_ordering(51) 00:12:11.174 fused_ordering(52) 00:12:11.174 fused_ordering(53) 00:12:11.174 fused_ordering(54) 00:12:11.174 fused_ordering(55) 00:12:11.174 fused_ordering(56) 00:12:11.174 fused_ordering(57) 00:12:11.174 fused_ordering(58) 00:12:11.174 fused_ordering(59) 00:12:11.174 fused_ordering(60) 00:12:11.174 fused_ordering(61) 00:12:11.174 fused_ordering(62) 00:12:11.174 fused_ordering(63) 00:12:11.174 fused_ordering(64) 00:12:11.174 fused_ordering(65) 00:12:11.174 fused_ordering(66) 00:12:11.174 fused_ordering(67) 00:12:11.174 fused_ordering(68) 00:12:11.174 fused_ordering(69) 00:12:11.174 fused_ordering(70) 00:12:11.174 fused_ordering(71) 00:12:11.174 fused_ordering(72) 00:12:11.174 fused_ordering(73) 00:12:11.174 fused_ordering(74) 00:12:11.174 fused_ordering(75) 00:12:11.174 fused_ordering(76) 00:12:11.174 fused_ordering(77) 00:12:11.174 fused_ordering(78) 00:12:11.174 fused_ordering(79) 00:12:11.174 fused_ordering(80) 00:12:11.174 fused_ordering(81) 00:12:11.174 fused_ordering(82) 00:12:11.174 fused_ordering(83) 00:12:11.174 fused_ordering(84) 00:12:11.174 fused_ordering(85) 00:12:11.174 fused_ordering(86) 00:12:11.174 fused_ordering(87) 00:12:11.174 fused_ordering(88) 00:12:11.174 fused_ordering(89) 00:12:11.174 fused_ordering(90) 00:12:11.174 fused_ordering(91) 00:12:11.174 fused_ordering(92) 00:12:11.175 fused_ordering(93) 00:12:11.175 fused_ordering(94) 00:12:11.175 fused_ordering(95) 00:12:11.175 fused_ordering(96) 00:12:11.175 fused_ordering(97) 00:12:11.175 fused_ordering(98) 
00:12:11.175 fused_ordering(99) 00:12:11.175 fused_ordering(100) 00:12:11.175 fused_ordering(101) 00:12:11.175 fused_ordering(102) 00:12:11.175 fused_ordering(103) 00:12:11.175 fused_ordering(104) 00:12:11.175 fused_ordering(105) 00:12:11.175 fused_ordering(106) 00:12:11.175 fused_ordering(107) 00:12:11.175 fused_ordering(108) 00:12:11.175 fused_ordering(109) 00:12:11.175 fused_ordering(110) 00:12:11.175 fused_ordering(111) 00:12:11.175 fused_ordering(112) 00:12:11.175 fused_ordering(113) 00:12:11.175 fused_ordering(114) 00:12:11.175 fused_ordering(115) 00:12:11.175 fused_ordering(116) 00:12:11.175 fused_ordering(117) 00:12:11.175 fused_ordering(118) 00:12:11.175 fused_ordering(119) 00:12:11.175 fused_ordering(120) 00:12:11.175 fused_ordering(121) 00:12:11.175 fused_ordering(122) 00:12:11.175 fused_ordering(123) 00:12:11.175 fused_ordering(124) 00:12:11.175 fused_ordering(125) 00:12:11.175 fused_ordering(126) 00:12:11.175 fused_ordering(127) 00:12:11.175 fused_ordering(128) 00:12:11.175 fused_ordering(129) 00:12:11.175 fused_ordering(130) 00:12:11.175 fused_ordering(131) 00:12:11.175 fused_ordering(132) 00:12:11.175 fused_ordering(133) 00:12:11.175 fused_ordering(134) 00:12:11.175 fused_ordering(135) 00:12:11.175 fused_ordering(136) 00:12:11.175 fused_ordering(137) 00:12:11.175 fused_ordering(138) 00:12:11.175 fused_ordering(139) 00:12:11.175 fused_ordering(140) 00:12:11.175 fused_ordering(141) 00:12:11.175 fused_ordering(142) 00:12:11.175 fused_ordering(143) 00:12:11.175 fused_ordering(144) 00:12:11.175 fused_ordering(145) 00:12:11.175 fused_ordering(146) 00:12:11.175 fused_ordering(147) 00:12:11.175 fused_ordering(148) 00:12:11.175 fused_ordering(149) 00:12:11.175 fused_ordering(150) 00:12:11.175 fused_ordering(151) 00:12:11.175 fused_ordering(152) 00:12:11.175 fused_ordering(153) 00:12:11.175 fused_ordering(154) 00:12:11.175 fused_ordering(155) 00:12:11.175 fused_ordering(156) 00:12:11.175 fused_ordering(157) 00:12:11.175 fused_ordering(158) 00:12:11.175 fused_ordering(159) 00:12:11.175 fused_ordering(160) 00:12:11.175 fused_ordering(161) 00:12:11.175 fused_ordering(162) 00:12:11.175 fused_ordering(163) 00:12:11.175 fused_ordering(164) 00:12:11.175 fused_ordering(165) 00:12:11.175 fused_ordering(166) 00:12:11.175 fused_ordering(167) 00:12:11.175 fused_ordering(168) 00:12:11.175 fused_ordering(169) 00:12:11.175 fused_ordering(170) 00:12:11.175 fused_ordering(171) 00:12:11.175 fused_ordering(172) 00:12:11.175 fused_ordering(173) 00:12:11.175 fused_ordering(174) 00:12:11.175 fused_ordering(175) 00:12:11.175 fused_ordering(176) 00:12:11.175 fused_ordering(177) 00:12:11.175 fused_ordering(178) 00:12:11.175 fused_ordering(179) 00:12:11.175 fused_ordering(180) 00:12:11.175 fused_ordering(181) 00:12:11.175 fused_ordering(182) 00:12:11.175 fused_ordering(183) 00:12:11.175 fused_ordering(184) 00:12:11.175 fused_ordering(185) 00:12:11.175 fused_ordering(186) 00:12:11.175 fused_ordering(187) 00:12:11.175 fused_ordering(188) 00:12:11.175 fused_ordering(189) 00:12:11.175 fused_ordering(190) 00:12:11.175 fused_ordering(191) 00:12:11.175 fused_ordering(192) 00:12:11.175 fused_ordering(193) 00:12:11.175 fused_ordering(194) 00:12:11.175 fused_ordering(195) 00:12:11.175 fused_ordering(196) 00:12:11.175 fused_ordering(197) 00:12:11.175 fused_ordering(198) 00:12:11.175 fused_ordering(199) 00:12:11.175 fused_ordering(200) 00:12:11.175 fused_ordering(201) 00:12:11.175 fused_ordering(202) 00:12:11.175 fused_ordering(203) 00:12:11.175 fused_ordering(204) 00:12:11.175 fused_ordering(205) 00:12:11.433 
fused_ordering(206) 00:12:11.433 fused_ordering(207) 00:12:11.433 fused_ordering(208) 00:12:11.433 fused_ordering(209) 00:12:11.433 fused_ordering(210) 00:12:11.433 fused_ordering(211) 00:12:11.433 fused_ordering(212) 00:12:11.433 fused_ordering(213) 00:12:11.433 fused_ordering(214) 00:12:11.433 fused_ordering(215) 00:12:11.433 fused_ordering(216) 00:12:11.433 fused_ordering(217) 00:12:11.433 fused_ordering(218) 00:12:11.433 fused_ordering(219) 00:12:11.433 fused_ordering(220) 00:12:11.433 fused_ordering(221) 00:12:11.433 fused_ordering(222) 00:12:11.433 fused_ordering(223) 00:12:11.433 fused_ordering(224) 00:12:11.433 fused_ordering(225) 00:12:11.433 fused_ordering(226) 00:12:11.433 fused_ordering(227) 00:12:11.433 fused_ordering(228) 00:12:11.433 fused_ordering(229) 00:12:11.433 fused_ordering(230) 00:12:11.433 fused_ordering(231) 00:12:11.433 fused_ordering(232) 00:12:11.433 fused_ordering(233) 00:12:11.433 fused_ordering(234) 00:12:11.433 fused_ordering(235) 00:12:11.433 fused_ordering(236) 00:12:11.433 fused_ordering(237) 00:12:11.433 fused_ordering(238) 00:12:11.433 fused_ordering(239) 00:12:11.433 fused_ordering(240) 00:12:11.433 fused_ordering(241) 00:12:11.433 fused_ordering(242) 00:12:11.433 fused_ordering(243) 00:12:11.433 fused_ordering(244) 00:12:11.433 fused_ordering(245) 00:12:11.433 fused_ordering(246) 00:12:11.433 fused_ordering(247) 00:12:11.433 fused_ordering(248) 00:12:11.433 fused_ordering(249) 00:12:11.433 fused_ordering(250) 00:12:11.433 fused_ordering(251) 00:12:11.433 fused_ordering(252) 00:12:11.433 fused_ordering(253) 00:12:11.433 fused_ordering(254) 00:12:11.433 fused_ordering(255) 00:12:11.433 fused_ordering(256) 00:12:11.433 fused_ordering(257) 00:12:11.433 fused_ordering(258) 00:12:11.433 fused_ordering(259) 00:12:11.433 fused_ordering(260) 00:12:11.433 fused_ordering(261) 00:12:11.433 fused_ordering(262) 00:12:11.433 fused_ordering(263) 00:12:11.433 fused_ordering(264) 00:12:11.433 fused_ordering(265) 00:12:11.433 fused_ordering(266) 00:12:11.433 fused_ordering(267) 00:12:11.433 fused_ordering(268) 00:12:11.433 fused_ordering(269) 00:12:11.433 fused_ordering(270) 00:12:11.433 fused_ordering(271) 00:12:11.433 fused_ordering(272) 00:12:11.433 fused_ordering(273) 00:12:11.433 fused_ordering(274) 00:12:11.433 fused_ordering(275) 00:12:11.433 fused_ordering(276) 00:12:11.433 fused_ordering(277) 00:12:11.433 fused_ordering(278) 00:12:11.433 fused_ordering(279) 00:12:11.433 fused_ordering(280) 00:12:11.433 fused_ordering(281) 00:12:11.433 fused_ordering(282) 00:12:11.433 fused_ordering(283) 00:12:11.433 fused_ordering(284) 00:12:11.433 fused_ordering(285) 00:12:11.433 fused_ordering(286) 00:12:11.433 fused_ordering(287) 00:12:11.433 fused_ordering(288) 00:12:11.433 fused_ordering(289) 00:12:11.433 fused_ordering(290) 00:12:11.433 fused_ordering(291) 00:12:11.433 fused_ordering(292) 00:12:11.433 fused_ordering(293) 00:12:11.433 fused_ordering(294) 00:12:11.433 fused_ordering(295) 00:12:11.433 fused_ordering(296) 00:12:11.433 fused_ordering(297) 00:12:11.433 fused_ordering(298) 00:12:11.433 fused_ordering(299) 00:12:11.433 fused_ordering(300) 00:12:11.433 fused_ordering(301) 00:12:11.433 fused_ordering(302) 00:12:11.433 fused_ordering(303) 00:12:11.433 fused_ordering(304) 00:12:11.433 fused_ordering(305) 00:12:11.433 fused_ordering(306) 00:12:11.433 fused_ordering(307) 00:12:11.433 fused_ordering(308) 00:12:11.433 fused_ordering(309) 00:12:11.433 fused_ordering(310) 00:12:11.433 fused_ordering(311) 00:12:11.433 fused_ordering(312) 00:12:11.433 fused_ordering(313) 
00:12:11.433 fused_ordering(314) 00:12:11.433 fused_ordering(315) 00:12:11.433 fused_ordering(316) 00:12:11.433 fused_ordering(317) 00:12:11.433 fused_ordering(318) 00:12:11.433 fused_ordering(319) 00:12:11.433 fused_ordering(320) 00:12:11.433 fused_ordering(321) 00:12:11.433 fused_ordering(322) 00:12:11.433 fused_ordering(323) 00:12:11.433 fused_ordering(324) 00:12:11.433 fused_ordering(325) 00:12:11.433 fused_ordering(326) 00:12:11.433 fused_ordering(327) 00:12:11.433 fused_ordering(328) 00:12:11.433 fused_ordering(329) 00:12:11.433 fused_ordering(330) 00:12:11.433 fused_ordering(331) 00:12:11.433 fused_ordering(332) 00:12:11.433 fused_ordering(333) 00:12:11.433 fused_ordering(334) 00:12:11.433 fused_ordering(335) 00:12:11.433 fused_ordering(336) 00:12:11.433 fused_ordering(337) 00:12:11.433 fused_ordering(338) 00:12:11.433 fused_ordering(339) 00:12:11.433 fused_ordering(340) 00:12:11.433 fused_ordering(341) 00:12:11.433 fused_ordering(342) 00:12:11.433 fused_ordering(343) 00:12:11.433 fused_ordering(344) 00:12:11.433 fused_ordering(345) 00:12:11.433 fused_ordering(346) 00:12:11.433 fused_ordering(347) 00:12:11.433 fused_ordering(348) 00:12:11.433 fused_ordering(349) 00:12:11.433 fused_ordering(350) 00:12:11.433 fused_ordering(351) 00:12:11.433 fused_ordering(352) 00:12:11.433 fused_ordering(353) 00:12:11.433 fused_ordering(354) 00:12:11.433 fused_ordering(355) 00:12:11.433 fused_ordering(356) 00:12:11.433 fused_ordering(357) 00:12:11.433 fused_ordering(358) 00:12:11.433 fused_ordering(359) 00:12:11.433 fused_ordering(360) 00:12:11.433 fused_ordering(361) 00:12:11.433 fused_ordering(362) 00:12:11.433 fused_ordering(363) 00:12:11.433 fused_ordering(364) 00:12:11.433 fused_ordering(365) 00:12:11.433 fused_ordering(366) 00:12:11.433 fused_ordering(367) 00:12:11.433 fused_ordering(368) 00:12:11.433 fused_ordering(369) 00:12:11.433 fused_ordering(370) 00:12:11.433 fused_ordering(371) 00:12:11.433 fused_ordering(372) 00:12:11.433 fused_ordering(373) 00:12:11.433 fused_ordering(374) 00:12:11.433 fused_ordering(375) 00:12:11.433 fused_ordering(376) 00:12:11.433 fused_ordering(377) 00:12:11.433 fused_ordering(378) 00:12:11.433 fused_ordering(379) 00:12:11.433 fused_ordering(380) 00:12:11.433 fused_ordering(381) 00:12:11.433 fused_ordering(382) 00:12:11.433 fused_ordering(383) 00:12:11.433 fused_ordering(384) 00:12:11.433 fused_ordering(385) 00:12:11.433 fused_ordering(386) 00:12:11.433 fused_ordering(387) 00:12:11.433 fused_ordering(388) 00:12:11.433 fused_ordering(389) 00:12:11.433 fused_ordering(390) 00:12:11.434 fused_ordering(391) 00:12:11.434 fused_ordering(392) 00:12:11.434 fused_ordering(393) 00:12:11.434 fused_ordering(394) 00:12:11.434 fused_ordering(395) 00:12:11.434 fused_ordering(396) 00:12:11.434 fused_ordering(397) 00:12:11.434 fused_ordering(398) 00:12:11.434 fused_ordering(399) 00:12:11.434 fused_ordering(400) 00:12:11.434 fused_ordering(401) 00:12:11.434 fused_ordering(402) 00:12:11.434 fused_ordering(403) 00:12:11.434 fused_ordering(404) 00:12:11.434 fused_ordering(405) 00:12:11.434 fused_ordering(406) 00:12:11.434 fused_ordering(407) 00:12:11.434 fused_ordering(408) 00:12:11.434 fused_ordering(409) 00:12:11.434 fused_ordering(410) 00:12:11.691 fused_ordering(411) 00:12:11.691 fused_ordering(412) 00:12:11.691 fused_ordering(413) 00:12:11.691 fused_ordering(414) 00:12:11.691 fused_ordering(415) 00:12:11.691 fused_ordering(416) 00:12:11.691 fused_ordering(417) 00:12:11.691 fused_ordering(418) 00:12:11.691 fused_ordering(419) 00:12:11.691 fused_ordering(420) 00:12:11.691 
fused_ordering(421) 00:12:11.691 fused_ordering(422) 00:12:11.691 fused_ordering(423) 00:12:11.691 fused_ordering(424) 00:12:11.691 fused_ordering(425) 00:12:11.691 fused_ordering(426) 00:12:11.691 fused_ordering(427) 00:12:11.691 fused_ordering(428) 00:12:11.691 fused_ordering(429) 00:12:11.691 fused_ordering(430) 00:12:11.691 fused_ordering(431) 00:12:11.691 fused_ordering(432) 00:12:11.691 fused_ordering(433) 00:12:11.691 fused_ordering(434) 00:12:11.691 fused_ordering(435) 00:12:11.691 fused_ordering(436) 00:12:11.691 fused_ordering(437) 00:12:11.691 fused_ordering(438) 00:12:11.691 fused_ordering(439) 00:12:11.691 fused_ordering(440) 00:12:11.691 fused_ordering(441) 00:12:11.692 fused_ordering(442) 00:12:11.692 fused_ordering(443) 00:12:11.692 fused_ordering(444) 00:12:11.692 fused_ordering(445) 00:12:11.692 fused_ordering(446) 00:12:11.692 fused_ordering(447) 00:12:11.692 fused_ordering(448) 00:12:11.692 fused_ordering(449) 00:12:11.692 fused_ordering(450) 00:12:11.692 fused_ordering(451) 00:12:11.692 fused_ordering(452) 00:12:11.692 fused_ordering(453) 00:12:11.692 fused_ordering(454) 00:12:11.692 fused_ordering(455) 00:12:11.692 fused_ordering(456) 00:12:11.692 fused_ordering(457) 00:12:11.692 fused_ordering(458) 00:12:11.692 fused_ordering(459) 00:12:11.692 fused_ordering(460) 00:12:11.692 fused_ordering(461) 00:12:11.692 fused_ordering(462) 00:12:11.692 fused_ordering(463) 00:12:11.692 fused_ordering(464) 00:12:11.692 fused_ordering(465) 00:12:11.692 fused_ordering(466) 00:12:11.692 fused_ordering(467) 00:12:11.692 fused_ordering(468) 00:12:11.692 fused_ordering(469) 00:12:11.692 fused_ordering(470) 00:12:11.692 fused_ordering(471) 00:12:11.692 fused_ordering(472) 00:12:11.692 fused_ordering(473) 00:12:11.692 fused_ordering(474) 00:12:11.692 fused_ordering(475) 00:12:11.692 fused_ordering(476) 00:12:11.692 fused_ordering(477) 00:12:11.692 fused_ordering(478) 00:12:11.692 fused_ordering(479) 00:12:11.692 fused_ordering(480) 00:12:11.692 fused_ordering(481) 00:12:11.692 fused_ordering(482) 00:12:11.692 fused_ordering(483) 00:12:11.692 fused_ordering(484) 00:12:11.692 fused_ordering(485) 00:12:11.692 fused_ordering(486) 00:12:11.692 fused_ordering(487) 00:12:11.692 fused_ordering(488) 00:12:11.692 fused_ordering(489) 00:12:11.692 fused_ordering(490) 00:12:11.692 fused_ordering(491) 00:12:11.692 fused_ordering(492) 00:12:11.692 fused_ordering(493) 00:12:11.692 fused_ordering(494) 00:12:11.692 fused_ordering(495) 00:12:11.692 fused_ordering(496) 00:12:11.692 fused_ordering(497) 00:12:11.692 fused_ordering(498) 00:12:11.692 fused_ordering(499) 00:12:11.692 fused_ordering(500) 00:12:11.692 fused_ordering(501) 00:12:11.692 fused_ordering(502) 00:12:11.692 fused_ordering(503) 00:12:11.692 fused_ordering(504) 00:12:11.692 fused_ordering(505) 00:12:11.692 fused_ordering(506) 00:12:11.692 fused_ordering(507) 00:12:11.692 fused_ordering(508) 00:12:11.692 fused_ordering(509) 00:12:11.692 fused_ordering(510) 00:12:11.692 fused_ordering(511) 00:12:11.692 fused_ordering(512) 00:12:11.692 fused_ordering(513) 00:12:11.692 fused_ordering(514) 00:12:11.692 fused_ordering(515) 00:12:11.692 fused_ordering(516) 00:12:11.692 fused_ordering(517) 00:12:11.692 fused_ordering(518) 00:12:11.692 fused_ordering(519) 00:12:11.692 fused_ordering(520) 00:12:11.692 fused_ordering(521) 00:12:11.692 fused_ordering(522) 00:12:11.692 fused_ordering(523) 00:12:11.692 fused_ordering(524) 00:12:11.692 fused_ordering(525) 00:12:11.692 fused_ordering(526) 00:12:11.692 fused_ordering(527) 00:12:11.692 fused_ordering(528) 
00:12:11.692 fused_ordering(529) [entries fused_ordering(530) through fused_ordering(820) omitted: identical one-per-entry output, timestamps advancing from 00:12:11.692 to 00:12:12.825]
00:12:12.825 [2024-12-13 09:23:24.905105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104a340 is same with the state(6) to be set
00:12:12.825 fused_ordering(821) [entries fused_ordering(822) through fused_ordering(1023) omitted: identical one-per-entry output, timestamps advancing from 00:12:12.825 to 00:12:12.826]
00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.826 rmmod nvme_tcp 00:12:12.826 rmmod nvme_fabrics 00:12:12.826 rmmod nvme_keyring 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 --
# set -e 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3274331 ']' 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3274331 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3274331 ']' 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3274331 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.826 09:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3274331 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3274331' 00:12:12.826 killing process with pid 3274331 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3274331 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3274331 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.826 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.084 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.084 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.084 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.084 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.084 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.986 00:12:14.986 real 0m10.197s 00:12:14.986 user 0m4.791s 00:12:14.986 sys 0m5.546s 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
-- # set +x 00:12:14.986 ************************************ 00:12:14.986 END TEST nvmf_fused_ordering 00:12:14.986 ************************************ 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.986 ************************************ 00:12:14.986 START TEST nvmf_ns_masking 00:12:14.986 ************************************ 00:12:14.986 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:15.245 * Looking for test storage... 00:12:15.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.245 --rc genhtml_branch_coverage=1 00:12:15.245 --rc genhtml_function_coverage=1 00:12:15.245 --rc genhtml_legend=1 00:12:15.245 --rc geninfo_all_blocks=1 00:12:15.245 --rc geninfo_unexecuted_blocks=1 00:12:15.245 00:12:15.245 ' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.245 --rc genhtml_branch_coverage=1 00:12:15.245 --rc genhtml_function_coverage=1 00:12:15.245 --rc genhtml_legend=1 00:12:15.245 --rc geninfo_all_blocks=1 00:12:15.245 --rc geninfo_unexecuted_blocks=1 00:12:15.245 00:12:15.245 ' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.245 --rc genhtml_branch_coverage=1 00:12:15.245 --rc genhtml_function_coverage=1 00:12:15.245 --rc genhtml_legend=1 00:12:15.245 --rc geninfo_all_blocks=1 00:12:15.245 --rc geninfo_unexecuted_blocks=1 00:12:15.245 00:12:15.245 ' 00:12:15.245 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.245 --rc genhtml_branch_coverage=1 00:12:15.245 --rc genhtml_function_coverage=1 00:12:15.245 --rc genhtml_legend=1 00:12:15.245 --rc geninfo_all_blocks=1 00:12:15.246 --rc geninfo_unexecuted_blocks=1 00:12:15.246 00:12:15.246 ' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9ef9538e-6ca0-4177-8c8f-0840f91536d1 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=783c2b4b-a1df-4d87-be7d-ae8919384af2 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6e6673c8-e709-4a0c-8e94-dd642a2beb45 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.246 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.508 09:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:20.508 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:20.508 09:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:20.508 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:20.508 Found net devices under 0000:af:00.0: cvl_0_0 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
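Discovery of the second port (cvl_0_1) and the rest of nvmftestinit follow. Condensed, the test network the trace below builds looks roughly like this (interface names and addresses are the ones this run reports; a reconstruction from the trace, not the nvmf/common.sh source, with the iptables comment option and address flushes left out):

    # Target side: move cvl_0_0 into its own network namespace and give it 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Initiator side: keep cvl_0_1 in the host namespace with 10.0.0.1 and open TCP/4420
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
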
00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:20.508 Found net devices under 0000:af:00.1: cvl_0_1 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.508 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.509 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.767 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.767 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.767 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.767 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.767 09:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:12:20.767 00:12:20.767 --- 10.0.0.2 ping statistics --- 00:12:20.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.767 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:20.767 00:12:20.767 --- 10.0.0.1 ping statistics --- 00:12:20.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.767 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3278249 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3278249 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3278249 ']' 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.767 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:20.767 [2024-12-13 09:23:33.125210] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:20.767 [2024-12-13 09:23:33.125264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.025 [2024-12-13 09:23:33.194203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.025 [2024-12-13 09:23:33.235476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.025 [2024-12-13 09:23:33.235511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.025 [2024-12-13 09:23:33.235518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.025 [2024-12-13 09:23:33.235525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.025 [2024-12-13 09:23:33.235529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
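Once nvmf_tgt is listening on /var/tmp/spdk.sock, the trace that follows drives the namespace-masking setup over JSON-RPC. Condensed to the effective commands (reconstructed from the rpc.py and nvme invocations visible in this log; rpc.py stands for the workspace's scripts/rpc.py, and the NQNs, serial, and host UUID are the ones this run generated, so treat this as a sketch rather than the literal ns_masking.sh source):

    # Create the TCP transport, two 64 MiB malloc bdevs, and the test subsystem
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # Attach Malloc1 as namespace 1 and listen on the in-namespace address
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Connect the initiator as host1, identified by the generated host UUID
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e6673c8-e709-4a0c-8e94-dd642a2beb45 -a 10.0.0.2 -s 4420 -i 4

Later steps in the trace re-add Malloc1 with --no-auto-visible and toggle per-host access with nvmf_ns_add_host and nvmf_ns_remove_host, which is what the visibility checks further below exercise.
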
00:12:21.025 [2024-12-13 09:23:33.236041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.025 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:21.283 [2024-12-13 09:23:33.536539] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.283 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:21.283 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:21.283 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:21.541 Malloc1 00:12:21.541 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:21.798 Malloc2 00:12:21.798 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.056 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:22.056 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.313 [2024-12-13 09:23:34.520359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.313 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:22.313 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e6673c8-e709-4a0c-8e94-dd642a2beb45 -a 10.0.0.2 -s 4420 -i 4 00:12:22.570 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.571 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.571 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.571 09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.571 
09:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.468 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:24.726 [ 0]:0x1 00:12:24.726 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:24.726 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.726 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=71ef0183be8242db94df1a8843e51322 00:12:24.726 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 71ef0183be8242db94df1a8843e51322 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.726 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:24.984 [ 0]:0x1 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=71ef0183be8242db94df1a8843e51322 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 71ef0183be8242db94df1a8843e51322 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.984 09:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:24.984 [ 1]:0x2 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.984 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.241 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e6673c8-e709-4a0c-8e94-dd642a2beb45 -a 10.0.0.2 -s 4420 -i 4 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:25.499 09:23:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.025 [ 0]:0x2 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=75121c62babe4514bcdbb7426196bc70 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.025 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.025 [ 0]:0x1 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=71ef0183be8242db94df1a8843e51322 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 71ef0183be8242db94df1a8843e51322 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.025 [ 1]:0x2 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.025 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:28.283 09:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.283 [ 0]:0x2 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.283 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:28.541 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:28.541 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6e6673c8-e709-4a0c-8e94-dd642a2beb45 -a 10.0.0.2 -s 4420 -i 4 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:28.798 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:30.697 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.697 [ 0]:0x1 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.697 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=71ef0183be8242db94df1a8843e51322 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 71ef0183be8242db94df1a8843e51322 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.955 [ 1]:0x2 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:30.955 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:31.212 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.212 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:31.212 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.212 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:31.212 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.213 [ 0]:0x2 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.213 09:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:31.213 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:31.471 [2024-12-13 09:23:43.606076] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:31.471 request: 00:12:31.471 { 00:12:31.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.471 "nsid": 2, 00:12:31.471 "host": "nqn.2016-06.io.spdk:host1", 00:12:31.471 "method": "nvmf_ns_remove_host", 00:12:31.471 "req_id": 1 00:12:31.471 } 00:12:31.471 Got JSON-RPC error response 00:12:31.471 response: 00:12:31.471 { 00:12:31.471 "code": -32602, 00:12:31.471 "message": "Invalid parameters" 00:12:31.471 } 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:31.471 09:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.471 [ 0]:0x2 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75121c62babe4514bcdbb7426196bc70 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75121c62babe4514bcdbb7426196bc70 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3280192 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3280192 /var/tmp/host.sock 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3280192 ']' 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:31.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.471 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.471 [2024-12-13 09:23:43.832558] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:31.471 [2024-12-13 09:23:43.832604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280192 ] 00:12:31.729 [2024-12-13 09:23:43.894968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.729 [2024-12-13 09:23:43.933782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.986 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.986 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:31.986 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.986 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:32.243 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9ef9538e-6ca0-4177-8c8f-0840f91536d1 00:12:32.243 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:32.243 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EF9538E6CA041778C8F0840F91536D1 -i 00:12:32.500 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 783c2b4b-a1df-4d87-be7d-ae8919384af2 00:12:32.500 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:32.500 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 783C2B4BA1DF4D87BE7DAE8919384AF2 -i 00:12:32.758 09:23:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:32.758 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:33.048 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:33.048 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:33.362 nvme0n1 00:12:33.362 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:33.362 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:33.676 nvme1n2 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:33.960 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:34.217 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9ef9538e-6ca0-4177-8c8f-0840f91536d1 == \9\e\f\9\5\3\8\e\-\6\c\a\0\-\4\1\7\7\-\8\c\8\f\-\0\8\4\0\f\9\1\5\3\6\d\1 ]] 00:12:34.217 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:34.217 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:34.217 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:34.473 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
783c2b4b-a1df-4d87-be7d-ae8919384af2 == \7\8\3\c\2\b\4\b\-\a\1\d\f\-\4\d\8\7\-\b\e\7\d\-\a\e\8\9\1\9\3\8\4\a\f\2 ]] 00:12:34.473 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.473 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9ef9538e-6ca0-4177-8c8f-0840f91536d1 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EF9538E6CA041778C8F0840F91536D1 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EF9538E6CA041778C8F0840F91536D1 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.729 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:34.729 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.729 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:34.729 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EF9538E6CA041778C8F0840F91536D1 00:12:34.986 [2024-12-13 09:23:47.167821] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:34.986 [2024-12-13 09:23:47.167849] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:34.986 [2024-12-13 09:23:47.167858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.986 request: 00:12:34.986 { 00:12:34.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.986 "namespace": { 00:12:34.986 "bdev_name": 
"invalid", 00:12:34.986 "nsid": 1, 00:12:34.986 "nguid": "9EF9538E6CA041778C8F0840F91536D1", 00:12:34.986 "no_auto_visible": false, 00:12:34.986 "hide_metadata": false 00:12:34.986 }, 00:12:34.986 "method": "nvmf_subsystem_add_ns", 00:12:34.986 "req_id": 1 00:12:34.986 } 00:12:34.986 Got JSON-RPC error response 00:12:34.986 response: 00:12:34.986 { 00:12:34.986 "code": -32602, 00:12:34.986 "message": "Invalid parameters" 00:12:34.986 } 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9ef9538e-6ca0-4177-8c8f-0840f91536d1 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:34.986 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EF9538E6CA041778C8F0840F91536D1 -i 00:12:35.243 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:37.138 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:37.138 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:37.138 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3280192 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3280192 ']' 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3280192 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3280192 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3280192' 00:12:37.395 killing process with pid 3280192 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3280192 00:12:37.395 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3280192 00:12:37.653 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.911 rmmod nvme_tcp 00:12:37.911 rmmod nvme_fabrics 00:12:37.911 rmmod nvme_keyring 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3278249 ']' 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3278249 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3278249 ']' 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3278249 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278249 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278249' 00:12:37.911 killing process with pid 3278249 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3278249 00:12:37.911 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3278249 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
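[editor's note] The cleanup traced around this point reduces to a short, repeatable sequence. A minimal sketch of that teardown follows; the rpc.py path and $TARGET_PID are placeholders for illustration, not values captured from this run:

    #!/usr/bin/env bash
    # Condensed sketch of the nvmftestfini-style teardown traced above.
    rpc=/path/to/spdk/scripts/rpc.py                       # placeholder path to SPDK's rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem
    kill "$TARGET_PID" 2>/dev/null || true                 # stop the nvmf target app (killprocess)
    # unload the kernel initiator modules pulled in by 'nvme connect'
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
    # restore iptables, dropping any SPDK_NVMF test rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
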
00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.168 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.696 00:12:40.696 real 0m25.172s 00:12:40.696 user 0m30.253s 00:12:40.696 sys 0m6.666s 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.696 ************************************ 00:12:40.696 END TEST nvmf_ns_masking 00:12:40.696 ************************************ 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.696 ************************************ 00:12:40.696 START TEST nvmf_nvme_cli 00:12:40.696 ************************************ 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:40.696 * Looking for test storage... 
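[editor's note] The nvmf_ns_masking test that ends here exercises per-host namespace visibility end to end. Below is a condensed sketch of that flow rebuilt from the trace; the rpc.py path and target address are simplified placeholders rather than exact values from this run:

    #!/usr/bin/env bash
    # Condensed sketch of the per-host namespace masking flow from ns_masking.sh.
    rpc=/path/to/spdk/scripts/rpc.py            # placeholder path to SPDK's rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    # Attach the namespace without auto-visibility, then expose it to a single host.
    $rpc nvmf_subsystem_add_ns "$subsys" Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host "$subsys" 1 "$host"

    # On the initiator, a namespace counts as visible when it shows up in list-ns
    # and id-ns reports a non-zero NGUID; an all-zero NGUID means it is masked.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$subsys" -q "$host"
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

    # Revoking the host masks the namespace again without dropping the connection.
    $rpc nvmf_ns_remove_host "$subsys" 1 "$host"
    nvme disconnect -n "$subsys"
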
00:12:40.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.696 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.697 --rc genhtml_branch_coverage=1 00:12:40.697 --rc genhtml_function_coverage=1 00:12:40.697 --rc genhtml_legend=1 00:12:40.697 --rc geninfo_all_blocks=1 00:12:40.697 --rc geninfo_unexecuted_blocks=1 00:12:40.697 00:12:40.697 ' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.697 --rc genhtml_branch_coverage=1 00:12:40.697 --rc genhtml_function_coverage=1 00:12:40.697 --rc genhtml_legend=1 00:12:40.697 --rc geninfo_all_blocks=1 00:12:40.697 --rc geninfo_unexecuted_blocks=1 00:12:40.697 00:12:40.697 ' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.697 --rc genhtml_branch_coverage=1 00:12:40.697 --rc genhtml_function_coverage=1 00:12:40.697 --rc genhtml_legend=1 00:12:40.697 --rc geninfo_all_blocks=1 00:12:40.697 --rc geninfo_unexecuted_blocks=1 00:12:40.697 00:12:40.697 ' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:40.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.697 --rc genhtml_branch_coverage=1 00:12:40.697 --rc genhtml_function_coverage=1 00:12:40.697 --rc genhtml_legend=1 00:12:40.697 --rc geninfo_all_blocks=1 00:12:40.697 --rc geninfo_unexecuted_blocks=1 00:12:40.697 00:12:40.697 ' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
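[editor's note] The lcov check traced just above relies on the shell version comparison from scripts/common.sh (split each version on '.', '-' or ':' and compare field by field). A simplified, self-contained restatement of that idea, not the upstream code verbatim:

    #!/usr/bin/env bash
    # Simplified sketch of the lt / cmp_versions logic traced from scripts/common.sh.
    lt() {  # usage: lt 1.15 2  -> succeeds if $1 < $2
      local IFS=.-: v=0 ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      while (( v < ${#ver1[@]} || v < ${#ver2[@]} )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( v++ ))
      done
      return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov < 2: use the old LCOV option set"
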
00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.697 09:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.697 09:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.954 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:45.955 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:45.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:45.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.955 
09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:45.955 Found net devices under 0000:af:00.0: cvl_0_0 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:45.955 Found net devices under 0000:af:00.1: cvl_0_1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:12:45.955 00:12:45.955 --- 10.0.0.2 ping statistics --- 00:12:45.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.955 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:12:45.955 00:12:45.955 --- 10.0.0.1 ping statistics --- 00:12:45.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.955 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.955 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3284612 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3284612 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3284612 ']' 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.956 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.956 [2024-12-13 09:23:58.319726] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:12:45.956 [2024-12-13 09:23:58.319772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.213 [2024-12-13 09:23:58.386944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.213 [2024-12-13 09:23:58.430765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.213 [2024-12-13 09:23:58.430800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.213 [2024-12-13 09:23:58.430807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.213 [2024-12-13 09:23:58.430813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.213 [2024-12-13 09:23:58.430818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.213 [2024-12-13 09:23:58.432249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.213 [2024-12-13 09:23:58.432345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.213 [2024-12-13 09:23:58.432413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.213 [2024-12-13 09:23:58.432415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.213 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.213 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:46.213 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.213 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.213 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.214 [2024-12-13 09:23:58.567159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.214 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 Malloc0 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 Malloc1 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 [2024-12-13 09:23:58.648962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:46.471 00:12:46.471 Discovery Log Number of Records 2, Generation counter 2 00:12:46.471 =====Discovery Log Entry 0====== 00:12:46.471 trtype: tcp 00:12:46.471 adrfam: ipv4 00:12:46.471 subtype: current discovery subsystem 00:12:46.471 treq: not required 00:12:46.471 portid: 0 00:12:46.471 trsvcid: 4420 00:12:46.471 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:46.471 traddr: 10.0.0.2 00:12:46.471 eflags: explicit discovery connections, duplicate discovery information 00:12:46.471 sectype: none 00:12:46.471 =====Discovery Log Entry 1====== 00:12:46.471 trtype: tcp 00:12:46.471 adrfam: ipv4 00:12:46.471 subtype: nvme subsystem 00:12:46.471 treq: not required 00:12:46.471 portid: 0 00:12:46.471 trsvcid: 4420 00:12:46.471 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:46.471 traddr: 10.0.0.2 00:12:46.471 eflags: none 00:12:46.471 sectype: none 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:46.471 09:23:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:47.842 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:49.738 09:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:49.738 09:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:49.738 09:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:49.738 09:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:49.738 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:49.739 /dev/nvme0n2 ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:49.739 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.997 09:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.997 rmmod nvme_tcp 00:12:49.997 rmmod nvme_fabrics 00:12:49.997 rmmod nvme_keyring 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3284612 ']' 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3284612 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3284612 ']' 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3284612 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3284612 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284612' 00:12:49.997 killing process with pid 3284612 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3284612 00:12:49.997 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3284612 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.255 09:24:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.787 00:12:52.787 real 0m11.991s 00:12:52.787 user 0m17.841s 00:12:52.787 sys 0m4.646s 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.787 ************************************ 00:12:52.787 END TEST nvmf_nvme_cli 00:12:52.787 ************************************ 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.787 ************************************ 00:12:52.787 START TEST nvmf_vfio_user 00:12:52.787 ************************************ 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:52.787 * Looking for test storage... 00:12:52.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.787 --rc genhtml_branch_coverage=1 00:12:52.787 --rc genhtml_function_coverage=1 00:12:52.787 --rc genhtml_legend=1 00:12:52.787 --rc geninfo_all_blocks=1 00:12:52.787 --rc geninfo_unexecuted_blocks=1 00:12:52.787 00:12:52.787 ' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.787 --rc genhtml_branch_coverage=1 00:12:52.787 --rc genhtml_function_coverage=1 00:12:52.787 --rc genhtml_legend=1 00:12:52.787 --rc geninfo_all_blocks=1 00:12:52.787 --rc geninfo_unexecuted_blocks=1 00:12:52.787 00:12:52.787 ' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:52.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.787 --rc genhtml_branch_coverage=1 00:12:52.787 --rc genhtml_function_coverage=1 00:12:52.787 --rc genhtml_legend=1 00:12:52.787 --rc geninfo_all_blocks=1 00:12:52.787 --rc geninfo_unexecuted_blocks=1 00:12:52.787 00:12:52.787 ' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.787 --rc genhtml_branch_coverage=1 00:12:52.787 --rc genhtml_function_coverage=1 00:12:52.787 --rc genhtml_legend=1 00:12:52.787 --rc geninfo_all_blocks=1 00:12:52.787 --rc geninfo_unexecuted_blocks=1 00:12:52.787 00:12:52.787 ' 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.787 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3285995 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3285995' 00:12:52.788 Process pid: 3285995 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3285995 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3285995 ']' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.788 09:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.788 [2024-12-13 09:24:04.872941] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:52.788 [2024-12-13 09:24:04.872990] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.788 [2024-12-13 09:24:04.935653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.788 [2024-12-13 09:24:04.977439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.788 [2024-12-13 09:24:04.977491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:52.788 [2024-12-13 09:24:04.977499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.788 [2024-12-13 09:24:04.977505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.788 [2024-12-13 09:24:04.977510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.788 [2024-12-13 09:24:04.978906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.788 [2024-12-13 09:24:04.979002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.788 [2024-12-13 09:24:04.979092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.788 [2024-12-13 09:24:04.979093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.788 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.788 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:52.788 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:53.719 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:53.976 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:53.976 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:53.976 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.976 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:53.976 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.233 Malloc1 00:12:54.233 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.491 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:54.749 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:55.006 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.006 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:55.006 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.006 Malloc2 00:12:55.006 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:12:55.263 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.520 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:55.779 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:55.779 [2024-12-13 09:24:07.962215] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:12:55.779 [2024-12-13 09:24:07.962253] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286475 ] 00:12:55.779 [2024-12-13 09:24:08.003488] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:55.779 [2024-12-13 09:24:08.005838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.779 [2024-12-13 09:24:08.005861] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe5988bf000 00:12:55.779 [2024-12-13 09:24:08.006838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.007842] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.008850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.009857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.010862] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.011873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.012893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.013886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.779 [2024-12-13 09:24:08.014893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.779 [2024-12-13 09:24:08.014903] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe5988b4000 00:12:55.779 [2024-12-13 09:24:08.015820] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.779 [2024-12-13 09:24:08.028738] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:55.779 [2024-12-13 09:24:08.028763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:55.779 [2024-12-13 09:24:08.034005] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:55.779 [2024-12-13 09:24:08.034043] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:55.779 [2024-12-13 09:24:08.034113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:55.779 [2024-12-13 09:24:08.034129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:55.779 [2024-12-13 09:24:08.034135] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:55.779 [2024-12-13 09:24:08.035004] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:55.779 [2024-12-13 09:24:08.035014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:55.779 [2024-12-13 09:24:08.035023] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:55.779 [2024-12-13 09:24:08.036006] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:55.779 [2024-12-13 09:24:08.036015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:55.779 [2024-12-13 09:24:08.036021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.037017] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:55.780 [2024-12-13 09:24:08.037027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.038021] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:55.780 [2024-12-13 09:24:08.038029] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:55.780 [2024-12-13 09:24:08.038033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.038039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.038147] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:55.780 [2024-12-13 09:24:08.038151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.038156] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:55.780 [2024-12-13 09:24:08.039030] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:55.780 [2024-12-13 09:24:08.040037] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:55.780 [2024-12-13 09:24:08.041041] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:55.780 [2024-12-13 09:24:08.042038] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.780 [2024-12-13 09:24:08.042105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:55.780 [2024-12-13 09:24:08.043049] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:55.780 [2024-12-13 09:24:08.043058] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:55.780 [2024-12-13 09:24:08.043062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:55.780 [2024-12-13 09:24:08.043086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043102] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.780 [2024-12-13 09:24:08.043107] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.780 [2024-12-13 09:24:08.043112] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.780 [2024-12-13 09:24:08.043124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043179] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:55.780 [2024-12-13 09:24:08.043184] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:55.780 [2024-12-13 09:24:08.043187] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:55.780 [2024-12-13 09:24:08.043191] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:55.780 [2024-12-13 09:24:08.043196] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:55.780 [2024-12-13 09:24:08.043199] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:55.780 [2024-12-13 09:24:08.043204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.780 [2024-12-13 09:24:08.043251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.780 [2024-12-13 09:24:08.043259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.780 [2024-12-13 09:24:08.043266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.780 [2024-12-13 09:24:08.043270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043299] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:55.780 
[2024-12-13 09:24:08.043304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:55.780 [2024-12-13 09:24:08.043401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:55.780 [2024-12-13 09:24:08.043404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.780 [2024-12-13 09:24:08.043409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043432] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:55.780 [2024-12-13 09:24:08.043442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.780 [2024-12-13 09:24:08.043465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.780 [2024-12-13 09:24:08.043468] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.780 [2024-12-13 09:24:08.043474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043519] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.780 [2024-12-13 09:24:08.043523] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.780 [2024-12-13 09:24:08.043526] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.780 [2024-12-13 09:24:08.043531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.780 [2024-12-13 09:24:08.043542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:55.780 [2024-12-13 09:24:08.043551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043583] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:55.780 [2024-12-13 09:24:08.043587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:55.780 [2024-12-13 09:24:08.043592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:55.781 [2024-12-13 09:24:08.043609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043690] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:55.781 [2024-12-13 09:24:08.043694] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:55.781 [2024-12-13 09:24:08.043697] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:55.781 [2024-12-13 09:24:08.043700] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:55.781 [2024-12-13 09:24:08.043703] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:55.781 [2024-12-13 09:24:08.043708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:55.781 [2024-12-13 09:24:08.043714] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:55.781 [2024-12-13 09:24:08.043718] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:55.781 [2024-12-13 09:24:08.043721] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.781 [2024-12-13 09:24:08.043726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043732] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:55.781 [2024-12-13 09:24:08.043736] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.781 [2024-12-13 09:24:08.043740] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.781 [2024-12-13 09:24:08.043746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043752] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:55.781 [2024-12-13 09:24:08.043756] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:55.781 [2024-12-13 09:24:08.043759] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:55.781 [2024-12-13 09:24:08.043764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:55.781 [2024-12-13 09:24:08.043770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:55.781 [2024-12-13 09:24:08.043796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:55.781 ===================================================== 00:12:55.781 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.781 ===================================================== 00:12:55.781 Controller Capabilities/Features 00:12:55.781 ================================ 00:12:55.781 Vendor ID: 4e58 00:12:55.781 Subsystem Vendor ID: 4e58 00:12:55.781 Serial Number: SPDK1 00:12:55.781 Model Number: SPDK bdev Controller 00:12:55.781 Firmware Version: 25.01 00:12:55.781 Recommended Arb Burst: 6 00:12:55.781 IEEE OUI Identifier: 8d 6b 50 00:12:55.781 Multi-path I/O 00:12:55.781 May have multiple subsystem ports: Yes 00:12:55.781 May have multiple controllers: Yes 00:12:55.781 Associated with SR-IOV VF: No 00:12:55.781 Max Data Transfer Size: 131072 00:12:55.781 Max Number of Namespaces: 32 00:12:55.781 Max Number of I/O Queues: 127 00:12:55.781 NVMe Specification Version (VS): 1.3 00:12:55.781 NVMe Specification Version (Identify): 1.3 00:12:55.781 Maximum Queue Entries: 256 00:12:55.781 Contiguous Queues Required: Yes 00:12:55.781 Arbitration Mechanisms Supported 00:12:55.781 Weighted Round Robin: Not Supported 00:12:55.781 Vendor Specific: Not Supported 00:12:55.781 Reset Timeout: 15000 ms 00:12:55.781 Doorbell Stride: 4 bytes 00:12:55.781 NVM Subsystem Reset: Not Supported 00:12:55.781 Command Sets Supported 00:12:55.781 NVM Command Set: Supported 00:12:55.781 Boot Partition: Not Supported 00:12:55.781 Memory Page Size Minimum: 4096 bytes 00:12:55.781 Memory Page Size Maximum: 4096 bytes 00:12:55.781 Persistent Memory Region: Not Supported 00:12:55.781 Optional Asynchronous Events Supported 00:12:55.781 Namespace Attribute Notices: Supported 00:12:55.781 Firmware Activation Notices: Not Supported 00:12:55.781 ANA Change Notices: Not Supported 00:12:55.781 PLE Aggregate Log Change Notices: Not Supported 00:12:55.781 LBA Status Info Alert Notices: Not Supported 00:12:55.781 EGE Aggregate Log Change Notices: Not Supported 00:12:55.781 Normal NVM Subsystem Shutdown event: Not Supported 00:12:55.781 Zone Descriptor Change Notices: Not Supported 00:12:55.781 Discovery Log Change Notices: Not Supported 00:12:55.781 Controller Attributes 00:12:55.781 128-bit Host Identifier: Supported 00:12:55.781 Non-Operational Permissive Mode: Not Supported 00:12:55.781 NVM Sets: Not Supported 00:12:55.781 Read Recovery Levels: Not Supported 00:12:55.781 Endurance Groups: Not Supported 00:12:55.781 Predictable Latency Mode: Not Supported 00:12:55.781 Traffic Based Keep ALive: Not Supported 00:12:55.781 Namespace Granularity: Not Supported 00:12:55.781 SQ Associations: Not Supported 00:12:55.781 UUID List: Not Supported 00:12:55.781 Multi-Domain Subsystem: Not Supported 00:12:55.781 Fixed Capacity Management: Not Supported 00:12:55.781 Variable Capacity Management: Not Supported 00:12:55.781 Delete Endurance Group: Not Supported 00:12:55.781 Delete NVM Set: Not Supported 00:12:55.781 Extended LBA Formats Supported: Not Supported 00:12:55.781 Flexible Data Placement Supported: Not Supported 00:12:55.781 00:12:55.781 Controller Memory Buffer Support 00:12:55.781 ================================ 00:12:55.781 
Supported: No 00:12:55.781 00:12:55.781 Persistent Memory Region Support 00:12:55.781 ================================ 00:12:55.781 Supported: No 00:12:55.781 00:12:55.781 Admin Command Set Attributes 00:12:55.781 ============================ 00:12:55.781 Security Send/Receive: Not Supported 00:12:55.781 Format NVM: Not Supported 00:12:55.781 Firmware Activate/Download: Not Supported 00:12:55.781 Namespace Management: Not Supported 00:12:55.781 Device Self-Test: Not Supported 00:12:55.781 Directives: Not Supported 00:12:55.781 NVMe-MI: Not Supported 00:12:55.781 Virtualization Management: Not Supported 00:12:55.781 Doorbell Buffer Config: Not Supported 00:12:55.781 Get LBA Status Capability: Not Supported 00:12:55.781 Command & Feature Lockdown Capability: Not Supported 00:12:55.781 Abort Command Limit: 4 00:12:55.781 Async Event Request Limit: 4 00:12:55.781 Number of Firmware Slots: N/A 00:12:55.781 Firmware Slot 1 Read-Only: N/A 00:12:55.781 Firmware Activation Without Reset: N/A 00:12:55.781 Multiple Update Detection Support: N/A 00:12:55.781 Firmware Update Granularity: No Information Provided 00:12:55.781 Per-Namespace SMART Log: No 00:12:55.781 Asymmetric Namespace Access Log Page: Not Supported 00:12:55.781 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:55.781 Command Effects Log Page: Supported 00:12:55.781 Get Log Page Extended Data: Supported 00:12:55.781 Telemetry Log Pages: Not Supported 00:12:55.781 Persistent Event Log Pages: Not Supported 00:12:55.781 Supported Log Pages Log Page: May Support 00:12:55.781 Commands Supported & Effects Log Page: Not Supported 00:12:55.781 Feature Identifiers & Effects Log Page:May Support 00:12:55.781 NVMe-MI Commands & Effects Log Page: May Support 00:12:55.781 Data Area 4 for Telemetry Log: Not Supported 00:12:55.781 Error Log Page Entries Supported: 128 00:12:55.781 Keep Alive: Supported 00:12:55.781 Keep Alive Granularity: 10000 ms 00:12:55.781 00:12:55.781 NVM Command Set Attributes 00:12:55.781 ========================== 00:12:55.781 Submission Queue Entry Size 00:12:55.781 Max: 64 00:12:55.781 Min: 64 00:12:55.781 Completion Queue Entry Size 00:12:55.781 Max: 16 00:12:55.781 Min: 16 00:12:55.781 Number of Namespaces: 32 00:12:55.781 Compare Command: Supported 00:12:55.781 Write Uncorrectable Command: Not Supported 00:12:55.781 Dataset Management Command: Supported 00:12:55.781 Write Zeroes Command: Supported 00:12:55.781 Set Features Save Field: Not Supported 00:12:55.781 Reservations: Not Supported 00:12:55.781 Timestamp: Not Supported 00:12:55.781 Copy: Supported 00:12:55.781 Volatile Write Cache: Present 00:12:55.781 Atomic Write Unit (Normal): 1 00:12:55.782 Atomic Write Unit (PFail): 1 00:12:55.782 Atomic Compare & Write Unit: 1 00:12:55.782 Fused Compare & Write: Supported 00:12:55.782 Scatter-Gather List 00:12:55.782 SGL Command Set: Supported (Dword aligned) 00:12:55.782 SGL Keyed: Not Supported 00:12:55.782 SGL Bit Bucket Descriptor: Not Supported 00:12:55.782 SGL Metadata Pointer: Not Supported 00:12:55.782 Oversized SGL: Not Supported 00:12:55.782 SGL Metadata Address: Not Supported 00:12:55.782 SGL Offset: Not Supported 00:12:55.782 Transport SGL Data Block: Not Supported 00:12:55.782 Replay Protected Memory Block: Not Supported 00:12:55.782 00:12:55.782 Firmware Slot Information 00:12:55.782 ========================= 00:12:55.782 Active slot: 1 00:12:55.782 Slot 1 Firmware Revision: 25.01 00:12:55.782 00:12:55.782 00:12:55.782 Commands Supported and Effects 00:12:55.782 ============================== 00:12:55.782 Admin 
Commands 00:12:55.782 -------------- 00:12:55.782 Get Log Page (02h): Supported 00:12:55.782 Identify (06h): Supported 00:12:55.782 Abort (08h): Supported 00:12:55.782 Set Features (09h): Supported 00:12:55.782 Get Features (0Ah): Supported 00:12:55.782 Asynchronous Event Request (0Ch): Supported 00:12:55.782 Keep Alive (18h): Supported 00:12:55.782 I/O Commands 00:12:55.782 ------------ 00:12:55.782 Flush (00h): Supported LBA-Change 00:12:55.782 Write (01h): Supported LBA-Change 00:12:55.782 Read (02h): Supported 00:12:55.782 Compare (05h): Supported 00:12:55.782 Write Zeroes (08h): Supported LBA-Change 00:12:55.782 Dataset Management (09h): Supported LBA-Change 00:12:55.782 Copy (19h): Supported LBA-Change 00:12:55.782 00:12:55.782 Error Log 00:12:55.782 ========= 00:12:55.782 00:12:55.782 Arbitration 00:12:55.782 =========== 00:12:55.782 Arbitration Burst: 1 00:12:55.782 00:12:55.782 Power Management 00:12:55.782 ================ 00:12:55.782 Number of Power States: 1 00:12:55.782 Current Power State: Power State #0 00:12:55.782 Power State #0: 00:12:55.782 Max Power: 0.00 W 00:12:55.782 Non-Operational State: Operational 00:12:55.782 Entry Latency: Not Reported 00:12:55.782 Exit Latency: Not Reported 00:12:55.782 Relative Read Throughput: 0 00:12:55.782 Relative Read Latency: 0 00:12:55.782 Relative Write Throughput: 0 00:12:55.782 Relative Write Latency: 0 00:12:55.782 Idle Power: Not Reported 00:12:55.782 Active Power: Not Reported 00:12:55.782 Non-Operational Permissive Mode: Not Supported 00:12:55.782 00:12:55.782 Health Information 00:12:55.782 ================== 00:12:55.782 Critical Warnings: 00:12:55.782 Available Spare Space: OK 00:12:55.782 Temperature: OK 00:12:55.782 Device Reliability: OK 00:12:55.782 Read Only: No 00:12:55.782 Volatile Memory Backup: OK 00:12:55.782 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:55.782 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:55.782 Available Spare: 0% 00:12:55.782 Available Sp[2024-12-13 09:24:08.043880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:55.782 [2024-12-13 09:24:08.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:55.782 [2024-12-13 09:24:08.043917] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:55.782 [2024-12-13 09:24:08.043926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.782 [2024-12-13 09:24:08.043931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.782 [2024-12-13 09:24:08.043937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.782 [2024-12-13 09:24:08.043942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.782 [2024-12-13 09:24:08.044053] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:55.782 [2024-12-13 09:24:08.044062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:55.782 [2024-12-13 09:24:08.045081] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.782 [2024-12-13 09:24:08.045134] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:55.782 [2024-12-13 09:24:08.045141] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:55.782 [2024-12-13 09:24:08.046070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:55.782 [2024-12-13 09:24:08.046080] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:55.782 [2024-12-13 09:24:08.046127] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:55.782 [2024-12-13 09:24:08.047095] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.782 are Threshold: 0% 00:12:55.782 Life Percentage Used: 0% 00:12:55.782 Data Units Read: 0 00:12:55.782 Data Units Written: 0 00:12:55.782 Host Read Commands: 0 00:12:55.782 Host Write Commands: 0 00:12:55.782 Controller Busy Time: 0 minutes 00:12:55.782 Power Cycles: 0 00:12:55.782 Power On Hours: 0 hours 00:12:55.782 Unsafe Shutdowns: 0 00:12:55.782 Unrecoverable Media Errors: 0 00:12:55.782 Lifetime Error Log Entries: 0 00:12:55.782 Warning Temperature Time: 0 minutes 00:12:55.782 Critical Temperature Time: 0 minutes 00:12:55.782 00:12:55.782 Number of Queues 00:12:55.782 ================ 00:12:55.782 Number of I/O Submission Queues: 127 00:12:55.782 Number of I/O Completion Queues: 127 00:12:55.782 00:12:55.782 Active Namespaces 00:12:55.782 ================= 00:12:55.782 Namespace ID:1 00:12:55.782 Error Recovery Timeout: Unlimited 00:12:55.782 Command Set Identifier: NVM (00h) 00:12:55.782 Deallocate: Supported 00:12:55.782 Deallocated/Unwritten Error: Not Supported 00:12:55.782 Deallocated Read Value: Unknown 00:12:55.782 Deallocate in Write Zeroes: Not Supported 00:12:55.782 Deallocated Guard Field: 0xFFFF 00:12:55.782 Flush: Supported 00:12:55.782 Reservation: Supported 00:12:55.782 Namespace Sharing Capabilities: Multiple Controllers 00:12:55.782 Size (in LBAs): 131072 (0GiB) 00:12:55.782 Capacity (in LBAs): 131072 (0GiB) 00:12:55.782 Utilization (in LBAs): 131072 (0GiB) 00:12:55.782 NGUID: D22D47C652EC4229A659D25BDD303E10 00:12:55.782 UUID: d22d47c6-52ec-4229-a659-d25bdd303e10 00:12:55.782 Thin Provisioning: Not Supported 00:12:55.782 Per-NS Atomic Units: Yes 00:12:55.782 Atomic Boundary Size (Normal): 0 00:12:55.782 Atomic Boundary Size (PFail): 0 00:12:55.782 Atomic Boundary Offset: 0 00:12:55.782 Maximum Single Source Range Length: 65535 00:12:55.782 Maximum Copy Length: 65535 00:12:55.782 Maximum Source Range Count: 1 00:12:55.782 NGUID/EUI64 Never Reused: No 00:12:55.782 Namespace Write Protected: No 00:12:55.782 Number of LBA Formats: 1 00:12:55.782 Current LBA Format: LBA Format #00 00:12:55.782 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:55.782 00:12:55.782 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:12:56.039 [2024-12-13 09:24:08.274283] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.297 Initializing NVMe Controllers 00:13:01.297 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:01.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:01.297 Initialization complete. Launching workers. 00:13:01.297 ======================================================== 00:13:01.297 Latency(us) 00:13:01.297 Device Information : IOPS MiB/s Average min max 00:13:01.297 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39894.58 155.84 3208.62 969.76 8603.45 00:13:01.297 ======================================================== 00:13:01.297 Total : 39894.58 155.84 3208.62 969.76 8603.45 00:13:01.297 00:13:01.297 [2024-12-13 09:24:13.296214] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.297 09:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:01.297 [2024-12-13 09:24:13.530273] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.554 Initializing NVMe Controllers 00:13:06.554 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.554 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:06.554 Initialization complete. Launching workers. 
00:13:06.554 ======================================================== 00:13:06.554 Latency(us) 00:13:06.554 Device Information : IOPS MiB/s Average min max 00:13:06.554 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.51 62.74 7975.22 5984.54 8999.34 00:13:06.554 ======================================================== 00:13:06.554 Total : 16060.51 62.74 7975.22 5984.54 8999.34 00:13:06.554 00:13:06.554 [2024-12-13 09:24:18.569714] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.555 09:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:06.555 [2024-12-13 09:24:18.780690] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:11.816 [2024-12-13 09:24:23.840708] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:11.816 Initializing NVMe Controllers 00:13:11.816 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:11.816 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:11.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:11.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:11.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:11.816 Initialization complete. Launching workers. 00:13:11.816 Starting thread on core 2 00:13:11.816 Starting thread on core 3 00:13:11.816 Starting thread on core 1 00:13:11.816 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:11.816 [2024-12-13 09:24:24.142832] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.101 [2024-12-13 09:24:27.204285] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.101 Initializing NVMe Controllers 00:13:15.101 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.101 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:15.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:15.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:15.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:15.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:15.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:15.101 Initialization complete. Launching workers. 
00:13:15.101 Starting thread on core 1 with urgent priority queue 00:13:15.101 Starting thread on core 2 with urgent priority queue 00:13:15.101 Starting thread on core 3 with urgent priority queue 00:13:15.101 Starting thread on core 0 with urgent priority queue 00:13:15.101 SPDK bdev Controller (SPDK1 ) core 0: 7802.00 IO/s 12.82 secs/100000 ios 00:13:15.101 SPDK bdev Controller (SPDK1 ) core 1: 8833.33 IO/s 11.32 secs/100000 ios 00:13:15.101 SPDK bdev Controller (SPDK1 ) core 2: 7436.00 IO/s 13.45 secs/100000 ios 00:13:15.101 SPDK bdev Controller (SPDK1 ) core 3: 9487.33 IO/s 10.54 secs/100000 ios 00:13:15.101 ======================================================== 00:13:15.101 00:13:15.101 09:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.359 [2024-12-13 09:24:27.498057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.359 Initializing NVMe Controllers 00:13:15.359 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.359 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.359 Namespace ID: 1 size: 0GB 00:13:15.359 Initialization complete. 00:13:15.359 INFO: using host memory buffer for IO 00:13:15.359 Hello world! 00:13:15.359 [2024-12-13 09:24:27.532275] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.359 09:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.617 [2024-12-13 09:24:27.818843] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.551 Initializing NVMe Controllers 00:13:16.551 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.551 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.551 Initialization complete. Launching workers. 
00:13:16.551 submit (in ns) avg, min, max = 7361.1, 3135.2, 3999700.0 00:13:16.551 complete (in ns) avg, min, max = 19638.9, 1715.2, 4994694.3 00:13:16.551 00:13:16.551 Submit histogram 00:13:16.551 ================ 00:13:16.551 Range in us Cumulative Count 00:13:16.551 3.124 - 3.139: 0.0062% ( 1) 00:13:16.551 3.139 - 3.154: 0.0124% ( 1) 00:13:16.551 3.154 - 3.170: 0.0311% ( 3) 00:13:16.551 3.170 - 3.185: 0.0559% ( 4) 00:13:16.551 3.185 - 3.200: 0.0994% ( 7) 00:13:16.551 3.200 - 3.215: 0.6153% ( 83) 00:13:16.551 3.215 - 3.230: 3.1075% ( 401) 00:13:16.551 3.230 - 3.246: 7.8123% ( 757) 00:13:16.551 3.246 - 3.261: 13.3996% ( 899) 00:13:16.551 3.261 - 3.276: 19.6955% ( 1013) 00:13:16.551 3.276 - 3.291: 26.9422% ( 1166) 00:13:16.551 3.291 - 3.307: 33.2132% ( 1009) 00:13:16.551 3.307 - 3.322: 38.6265% ( 871) 00:13:16.551 3.322 - 3.337: 43.9776% ( 861) 00:13:16.551 3.337 - 3.352: 48.5519% ( 736) 00:13:16.551 3.352 - 3.368: 52.2623% ( 597) 00:13:16.551 3.368 - 3.383: 57.7377% ( 881) 00:13:16.551 3.383 - 3.398: 64.6364% ( 1110) 00:13:16.551 3.398 - 3.413: 69.6955% ( 814) 00:13:16.551 3.413 - 3.429: 75.4195% ( 921) 00:13:16.551 3.429 - 3.444: 80.1865% ( 767) 00:13:16.551 3.444 - 3.459: 83.3188% ( 504) 00:13:16.551 3.459 - 3.474: 85.4630% ( 345) 00:13:16.551 3.474 - 3.490: 86.8117% ( 217) 00:13:16.551 3.490 - 3.505: 87.5637% ( 121) 00:13:16.551 3.505 - 3.520: 88.2722% ( 114) 00:13:16.551 3.520 - 3.535: 89.0864% ( 131) 00:13:16.551 3.535 - 3.550: 89.9378% ( 137) 00:13:16.551 3.550 - 3.566: 90.8142% ( 141) 00:13:16.551 3.566 - 3.581: 91.5475% ( 118) 00:13:16.551 3.581 - 3.596: 92.3120% ( 123) 00:13:16.551 3.596 - 3.611: 93.2070% ( 144) 00:13:16.551 3.611 - 3.627: 94.0398% ( 134) 00:13:16.551 3.627 - 3.642: 95.0093% ( 156) 00:13:16.551 3.642 - 3.657: 95.8173% ( 130) 00:13:16.551 3.657 - 3.672: 96.6998% ( 142) 00:13:16.551 3.672 - 3.688: 97.3275% ( 101) 00:13:16.551 3.688 - 3.703: 97.9863% ( 106) 00:13:16.551 3.703 - 3.718: 98.3406% ( 57) 00:13:16.551 3.718 - 3.733: 98.7570% ( 67) 00:13:16.551 3.733 - 3.749: 99.0180% ( 42) 00:13:16.551 3.749 - 3.764: 99.1920% ( 28) 00:13:16.551 3.764 - 3.779: 99.3474% ( 25) 00:13:16.551 3.779 - 3.794: 99.4593% ( 18) 00:13:16.551 3.794 - 3.810: 99.4842% ( 4) 00:13:16.551 3.810 - 3.825: 99.5090% ( 4) 00:13:16.551 3.825 - 3.840: 99.5277% ( 3) 00:13:16.551 3.840 - 3.855: 99.5401% ( 2) 00:13:16.551 3.855 - 3.870: 99.5587% ( 3) 00:13:16.551 3.870 - 3.886: 99.5649% ( 1) 00:13:16.551 4.998 - 5.029: 99.5712% ( 1) 00:13:16.551 5.150 - 5.181: 99.5774% ( 1) 00:13:16.551 5.211 - 5.242: 99.5836% ( 1) 00:13:16.551 5.333 - 5.364: 99.5960% ( 2) 00:13:16.551 5.394 - 5.425: 99.6022% ( 1) 00:13:16.551 5.425 - 5.455: 99.6147% ( 2) 00:13:16.551 5.547 - 5.577: 99.6209% ( 1) 00:13:16.551 5.577 - 5.608: 99.6271% ( 1) 00:13:16.551 5.669 - 5.699: 99.6333% ( 1) 00:13:16.551 5.760 - 5.790: 99.6395% ( 1) 00:13:16.551 5.821 - 5.851: 99.6457% ( 1) 00:13:16.551 5.851 - 5.882: 99.6520% ( 1) 00:13:16.551 5.912 - 5.943: 99.6582% ( 1) 00:13:16.551 6.095 - 6.126: 99.6644% ( 1) 00:13:16.551 6.126 - 6.156: 99.6706% ( 1) 00:13:16.551 6.187 - 6.217: 99.6768% ( 1) 00:13:16.551 6.217 - 6.248: 99.6830% ( 1) 00:13:16.551 6.339 - 6.370: 99.6892% ( 1) 00:13:16.551 6.400 - 6.430: 99.6955% ( 1) 00:13:16.551 6.430 - 6.461: 99.7017% ( 1) 00:13:16.551 6.461 - 6.491: 99.7079% ( 1) 00:13:16.551 6.491 - 6.522: 99.7141% ( 1) 00:13:16.551 6.613 - 6.644: 99.7203% ( 1) 00:13:16.551 6.644 - 6.674: 99.7328% ( 2) 00:13:16.551 6.766 - 6.796: 99.7390% ( 1) 00:13:16.551 6.827 - 6.857: 99.7452% ( 1) 00:13:16.551 7.162 - 7.192: 99.7514% 
( 1) 00:13:16.551 7.223 - 7.253: 99.7576% ( 1) 00:13:16.551 7.253 - 7.284: 99.7700% ( 2) 00:13:16.551 [2024-12-13 09:24:28.840955] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.551 7.375 - 7.406: 99.7825% ( 2) 00:13:16.551 7.436 - 7.467: 99.7887% ( 1) 00:13:16.551 7.467 - 7.497: 99.8011% ( 2) 00:13:16.551 7.528 - 7.558: 99.8073% ( 1) 00:13:16.551 7.619 - 7.650: 99.8135% ( 1) 00:13:16.551 7.680 - 7.710: 99.8198% ( 1) 00:13:16.551 7.741 - 7.771: 99.8322% ( 2) 00:13:16.551 7.802 - 7.863: 99.8384% ( 1) 00:13:16.551 7.863 - 7.924: 99.8446% ( 1) 00:13:16.551 7.985 - 8.046: 99.8508% ( 1) 00:13:16.551 8.046 - 8.107: 99.8571% ( 1) 00:13:16.551 8.411 - 8.472: 99.8633% ( 1) 00:13:16.551 8.533 - 8.594: 99.8695% ( 1) 00:13:16.551 8.716 - 8.777: 99.8757% ( 1) 00:13:16.551 8.960 - 9.021: 99.8819% ( 1) 00:13:16.551 9.021 - 9.082: 99.8881% ( 1) 00:13:16.551 9.630 - 9.691: 99.8943% ( 1) 00:13:16.551 39.985 - 40.229: 99.9006% ( 1) 00:13:16.551 3994.575 - 4025.783: 100.0000% ( 16) 00:13:16.551 00:13:16.551 Complete histogram 00:13:16.551 ================== 00:13:16.551 Range in us Cumulative Count 00:13:16.551 1.714 - 1.722: 0.0311% ( 5) 00:13:16.551 1.722 - 1.730: 0.1554% ( 20) 00:13:16.551 1.730 - 1.737: 0.2548% ( 16) 00:13:16.551 1.737 - 1.745: 0.2921% ( 6) 00:13:16.551 1.745 - 1.752: 0.2983% ( 1) 00:13:16.551 1.752 - 1.760: 0.5718% ( 44) 00:13:16.551 1.760 - 1.768: 5.8484% ( 849) 00:13:16.551 1.768 - 1.775: 25.5065% ( 3163) 00:13:16.551 1.775 - 1.783: 46.1840% ( 3327) 00:13:16.551 1.783 - 1.790: 54.5805% ( 1351) 00:13:16.551 1.790 - 1.798: 58.0236% ( 554) 00:13:16.551 1.798 - 1.806: 62.0447% ( 647) 00:13:16.551 1.806 - 1.813: 70.2610% ( 1322) 00:13:16.551 1.813 - 1.821: 82.2561% ( 1930) 00:13:16.551 1.821 - 1.829: 90.3418% ( 1301) 00:13:16.551 1.829 - 1.836: 93.6793% ( 537) 00:13:16.551 1.836 - 1.844: 95.7178% ( 328) 00:13:16.551 1.844 - 1.851: 97.2902% ( 253) 00:13:16.551 1.851 - 1.859: 98.1168% ( 133) 00:13:16.551 1.859 - 1.867: 98.6886% ( 92) 00:13:16.552 1.867 - 1.874: 98.9559% ( 43) 00:13:16.552 1.874 - 1.882: 99.0988% ( 23) 00:13:16.552 1.882 - 1.890: 99.1983% ( 16) 00:13:16.552 1.890 - 1.897: 99.2728% ( 12) 00:13:16.552 1.897 - 1.905: 99.3101% ( 6) 00:13:16.552 1.905 - 1.912: 99.3412% ( 5) 00:13:16.552 1.912 - 1.920: 99.3599% ( 3) 00:13:16.552 1.920 - 1.928: 99.3723% ( 2) 00:13:16.552 1.928 - 1.935: 99.3785% ( 1) 00:13:16.552 1.935 - 1.943: 99.3909% ( 2) 00:13:16.552 2.088 - 2.103: 99.3971% ( 1) 00:13:16.552 2.179 - 2.194: 99.4034% ( 1) 00:13:16.552 2.194 - 2.210: 99.4096% ( 1) 00:13:16.552 3.307 - 3.322: 99.4158% ( 1) 00:13:16.552 3.657 - 3.672: 99.4220% ( 1) 00:13:16.552 3.794 - 3.810: 99.4282% ( 1) 00:13:16.552 3.840 - 3.855: 99.4344% ( 1) 00:13:16.552 4.175 - 4.206: 99.4406% ( 1) 00:13:16.552 4.480 - 4.510: 99.4469% ( 1) 00:13:16.552 4.907 - 4.937: 99.4531% ( 1) 00:13:16.552 5.272 - 5.303: 99.4593% ( 1) 00:13:16.552 5.547 - 5.577: 99.4655% ( 1) 00:13:16.552 5.882 - 5.912: 99.4717% ( 1) 00:13:16.552 5.973 - 6.004: 99.4779% ( 1) 00:13:16.552 6.309 - 6.339: 99.4842% ( 1) 00:13:16.552 6.644 - 6.674: 99.4904% ( 1) 00:13:16.552 6.827 - 6.857: 99.4966% ( 1) 00:13:16.552 6.857 - 6.888: 99.5028% ( 1) 00:13:16.552 7.010 - 7.040: 99.5090% ( 1) 00:13:16.552 7.101 - 7.131: 99.5152% ( 1) 00:13:16.552 7.375 - 7.406: 99.5214% ( 1) 00:13:16.552 7.436 - 7.467: 99.5277% ( 1) 00:13:16.552 13.105 - 13.166: 99.5339% ( 1) 00:13:16.552 14.141 - 14.202: 99.5401% ( 1) 00:13:16.552 30.476 - 30.598: 99.5463% ( 1) 00:13:16.552 44.130 - 44.373: 99.5525% ( 1) 
00:13:16.552 2309.364 - 2324.968: 99.5587% ( 1) 00:13:16.552 3994.575 - 4025.783: 99.9938% ( 70) 00:13:16.552 4993.219 - 5024.427: 100.0000% ( 1) 00:13:16.552 00:13:16.552 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:16.552 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:16.552 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:16.552 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:16.552 09:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:16.809 [ 00:13:16.809 { 00:13:16.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.809 "subtype": "Discovery", 00:13:16.809 "listen_addresses": [], 00:13:16.809 "allow_any_host": true, 00:13:16.809 "hosts": [] 00:13:16.809 }, 00:13:16.809 { 00:13:16.809 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.809 "subtype": "NVMe", 00:13:16.809 "listen_addresses": [ 00:13:16.809 { 00:13:16.809 "trtype": "VFIOUSER", 00:13:16.809 "adrfam": "IPv4", 00:13:16.809 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.809 "trsvcid": "0" 00:13:16.809 } 00:13:16.809 ], 00:13:16.809 "allow_any_host": true, 00:13:16.809 "hosts": [], 00:13:16.809 "serial_number": "SPDK1", 00:13:16.809 "model_number": "SPDK bdev Controller", 00:13:16.809 "max_namespaces": 32, 00:13:16.809 "min_cntlid": 1, 00:13:16.809 "max_cntlid": 65519, 00:13:16.809 "namespaces": [ 00:13:16.809 { 00:13:16.809 "nsid": 1, 00:13:16.809 "bdev_name": "Malloc1", 00:13:16.809 "name": "Malloc1", 00:13:16.809 "nguid": "D22D47C652EC4229A659D25BDD303E10", 00:13:16.809 "uuid": "d22d47c6-52ec-4229-a659-d25bdd303e10" 00:13:16.809 } 00:13:16.809 ] 00:13:16.809 }, 00:13:16.809 { 00:13:16.809 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.809 "subtype": "NVMe", 00:13:16.809 "listen_addresses": [ 00:13:16.809 { 00:13:16.809 "trtype": "VFIOUSER", 00:13:16.809 "adrfam": "IPv4", 00:13:16.809 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.809 "trsvcid": "0" 00:13:16.809 } 00:13:16.809 ], 00:13:16.809 "allow_any_host": true, 00:13:16.809 "hosts": [], 00:13:16.809 "serial_number": "SPDK2", 00:13:16.809 "model_number": "SPDK bdev Controller", 00:13:16.809 "max_namespaces": 32, 00:13:16.809 "min_cntlid": 1, 00:13:16.809 "max_cntlid": 65519, 00:13:16.809 "namespaces": [ 00:13:16.809 { 00:13:16.809 "nsid": 1, 00:13:16.809 "bdev_name": "Malloc2", 00:13:16.809 "name": "Malloc2", 00:13:16.809 "nguid": "C1AE331C7733484FAD605FF95CF74A22", 00:13:16.809 "uuid": "c1ae331c-7733-484f-ad60-5ff95cf74a22" 00:13:16.809 } 00:13:16.809 ] 00:13:16.809 } 00:13:16.809 ] 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3290423 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:16.809 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:17.067 [2024-12-13 09:24:29.245915] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:17.067 Malloc3 00:13:17.067 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:17.325 [2024-12-13 09:24:29.471692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.325 Asynchronous Event Request test 00:13:17.325 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.325 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.325 Registering asynchronous event callbacks... 00:13:17.325 Starting namespace attribute notice tests for all controllers... 00:13:17.325 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:17.325 aer_cb - Changed Namespace 00:13:17.325 Cleaning up... 
00:13:17.325 [ 00:13:17.325 { 00:13:17.325 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.325 "subtype": "Discovery", 00:13:17.325 "listen_addresses": [], 00:13:17.325 "allow_any_host": true, 00:13:17.325 "hosts": [] 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.325 "subtype": "NVMe", 00:13:17.325 "listen_addresses": [ 00:13:17.325 { 00:13:17.325 "trtype": "VFIOUSER", 00:13:17.325 "adrfam": "IPv4", 00:13:17.325 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.325 "trsvcid": "0" 00:13:17.325 } 00:13:17.325 ], 00:13:17.325 "allow_any_host": true, 00:13:17.325 "hosts": [], 00:13:17.325 "serial_number": "SPDK1", 00:13:17.325 "model_number": "SPDK bdev Controller", 00:13:17.325 "max_namespaces": 32, 00:13:17.325 "min_cntlid": 1, 00:13:17.325 "max_cntlid": 65519, 00:13:17.325 "namespaces": [ 00:13:17.325 { 00:13:17.325 "nsid": 1, 00:13:17.325 "bdev_name": "Malloc1", 00:13:17.325 "name": "Malloc1", 00:13:17.325 "nguid": "D22D47C652EC4229A659D25BDD303E10", 00:13:17.325 "uuid": "d22d47c6-52ec-4229-a659-d25bdd303e10" 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "nsid": 2, 00:13:17.325 "bdev_name": "Malloc3", 00:13:17.325 "name": "Malloc3", 00:13:17.325 "nguid": "6D814748D5DA44E4AB04CF7707884BAA", 00:13:17.325 "uuid": "6d814748-d5da-44e4-ab04-cf7707884baa" 00:13:17.325 } 00:13:17.325 ] 00:13:17.325 }, 00:13:17.325 { 00:13:17.325 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.325 "subtype": "NVMe", 00:13:17.325 "listen_addresses": [ 00:13:17.325 { 00:13:17.325 "trtype": "VFIOUSER", 00:13:17.325 "adrfam": "IPv4", 00:13:17.325 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.325 "trsvcid": "0" 00:13:17.325 } 00:13:17.325 ], 00:13:17.325 "allow_any_host": true, 00:13:17.325 "hosts": [], 00:13:17.325 "serial_number": "SPDK2", 00:13:17.325 "model_number": "SPDK bdev Controller", 00:13:17.325 "max_namespaces": 32, 00:13:17.325 "min_cntlid": 1, 00:13:17.325 "max_cntlid": 65519, 00:13:17.325 "namespaces": [ 00:13:17.325 { 00:13:17.325 "nsid": 1, 00:13:17.325 "bdev_name": "Malloc2", 00:13:17.325 "name": "Malloc2", 00:13:17.325 "nguid": "C1AE331C7733484FAD605FF95CF74A22", 00:13:17.325 "uuid": "c1ae331c-7733-484f-ad60-5ff95cf74a22" 00:13:17.325 } 00:13:17.325 ] 00:13:17.325 } 00:13:17.325 ] 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3290423 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:17.325 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:17.585 [2024-12-13 09:24:29.706321] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:13:17.585 [2024-12-13 09:24:29.706355] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290436 ] 00:13:17.585 [2024-12-13 09:24:29.744767] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:17.585 [2024-12-13 09:24:29.753690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.585 [2024-12-13 09:24:29.753715] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1afacda000 00:13:17.585 [2024-12-13 09:24:29.754687] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.755694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.756700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.757716] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.758722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.759726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.760741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.761751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.585 [2024-12-13 09:24:29.762758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.585 [2024-12-13 09:24:29.762768] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1afaccf000 00:13:17.585 [2024-12-13 09:24:29.763682] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.585 [2024-12-13 09:24:29.773047] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:17.585 [2024-12-13 09:24:29.773070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:17.585 [2024-12-13 09:24:29.778151] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:17.585 [2024-12-13 09:24:29.778187] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:17.585 [2024-12-13 09:24:29.778257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:17.585 
[2024-12-13 09:24:29.778271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:17.585 [2024-12-13 09:24:29.778276] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:17.585 [2024-12-13 09:24:29.779151] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:17.585 [2024-12-13 09:24:29.779161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:17.585 [2024-12-13 09:24:29.779168] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:17.585 [2024-12-13 09:24:29.780159] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:17.585 [2024-12-13 09:24:29.780168] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:17.585 [2024-12-13 09:24:29.780177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.781172] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:17.585 [2024-12-13 09:24:29.781181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.782172] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:17.585 [2024-12-13 09:24:29.782180] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:17.585 [2024-12-13 09:24:29.782185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.782191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.782297] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:17.585 [2024-12-13 09:24:29.782302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.782306] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:17.585 [2024-12-13 09:24:29.783188] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:17.585 [2024-12-13 09:24:29.784200] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:17.585 [2024-12-13 09:24:29.785206] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:17.585 [2024-12-13 09:24:29.786215] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.585 [2024-12-13 09:24:29.786251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:17.585 [2024-12-13 09:24:29.787222] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:17.585 [2024-12-13 09:24:29.787230] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:17.585 [2024-12-13 09:24:29.787234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:17.585 [2024-12-13 09:24:29.787251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:17.585 [2024-12-13 09:24:29.787260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:17.585 [2024-12-13 09:24:29.787274] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.585 [2024-12-13 09:24:29.787278] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.585 [2024-12-13 09:24:29.787281] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.585 [2024-12-13 09:24:29.787291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.585 [2024-12-13 09:24:29.794454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:17.585 [2024-12-13 09:24:29.794465] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:17.585 [2024-12-13 09:24:29.794469] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:17.585 [2024-12-13 09:24:29.794473] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:17.585 [2024-12-13 09:24:29.794477] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:17.585 [2024-12-13 09:24:29.794482] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:17.585 [2024-12-13 09:24:29.794486] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:17.585 [2024-12-13 09:24:29.794490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:17.585 [2024-12-13 09:24:29.794497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:17.585 [2024-12-13 
09:24:29.794506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:17.585 [2024-12-13 09:24:29.802454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:17.585 [2024-12-13 09:24:29.802467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.585 [2024-12-13 09:24:29.802474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.585 [2024-12-13 09:24:29.802482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.585 [2024-12-13 09:24:29.802489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.585 [2024-12-13 09:24:29.802493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:17.585 [2024-12-13 09:24:29.802503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.802512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.810454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.810462] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:17.586 [2024-12-13 09:24:29.810467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.810477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.810482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.810489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.818454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.818508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.818515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.818522] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:17.586 [2024-12-13 09:24:29.818526] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:13:17.586 [2024-12-13 09:24:29.818529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.818535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.826455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.826470] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:17.586 [2024-12-13 09:24:29.826477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.826483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.826489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.586 [2024-12-13 09:24:29.826493] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.586 [2024-12-13 09:24:29.826496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.826501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.834457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.834472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.834478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.834485] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.586 [2024-12-13 09:24:29.834489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.586 [2024-12-13 09:24:29.834492] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.834497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.842454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.842466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842500] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:17.586 [2024-12-13 09:24:29.842504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:17.586 [2024-12-13 09:24:29.842508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:17.586 [2024-12-13 09:24:29.842523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.850455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.850467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.858457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.858469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.866456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.866469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.874455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.874470] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:17.586 [2024-12-13 09:24:29.874474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:17.586 [2024-12-13 09:24:29.874477] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:17.586 [2024-12-13 09:24:29.874480] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:17.586 [2024-12-13 09:24:29.874483] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:17.586 [2024-12-13 09:24:29.874489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:17.586 [2024-12-13 09:24:29.874495] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:17.586 
[2024-12-13 09:24:29.874499] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:17.586 [2024-12-13 09:24:29.874502] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.874507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.874513] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:17.586 [2024-12-13 09:24:29.874517] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.586 [2024-12-13 09:24:29.874520] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.874525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.874534] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:17.586 [2024-12-13 09:24:29.874537] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:17.586 [2024-12-13 09:24:29.874540] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:17.586 [2024-12-13 09:24:29.874545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:17.586 [2024-12-13 09:24:29.882456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.882470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.882479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:17.586 [2024-12-13 09:24:29.882485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:17.586 ===================================================== 00:13:17.586 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:17.586 ===================================================== 00:13:17.586 Controller Capabilities/Features 00:13:17.586 ================================ 00:13:17.586 Vendor ID: 4e58 00:13:17.586 Subsystem Vendor ID: 4e58 00:13:17.586 Serial Number: SPDK2 00:13:17.586 Model Number: SPDK bdev Controller 00:13:17.586 Firmware Version: 25.01 00:13:17.586 Recommended Arb Burst: 6 00:13:17.586 IEEE OUI Identifier: 8d 6b 50 00:13:17.586 Multi-path I/O 00:13:17.586 May have multiple subsystem ports: Yes 00:13:17.586 May have multiple controllers: Yes 00:13:17.586 Associated with SR-IOV VF: No 00:13:17.586 Max Data Transfer Size: 131072 00:13:17.586 Max Number of Namespaces: 32 00:13:17.586 Max Number of I/O Queues: 127 00:13:17.586 NVMe Specification Version (VS): 1.3 00:13:17.586 NVMe Specification Version (Identify): 1.3 00:13:17.586 Maximum Queue Entries: 256 00:13:17.586 Contiguous Queues Required: Yes 00:13:17.586 Arbitration Mechanisms Supported 00:13:17.586 Weighted Round Robin: Not Supported 00:13:17.586 Vendor Specific: Not 
Supported 00:13:17.586 Reset Timeout: 15000 ms 00:13:17.586 Doorbell Stride: 4 bytes 00:13:17.586 NVM Subsystem Reset: Not Supported 00:13:17.586 Command Sets Supported 00:13:17.586 NVM Command Set: Supported 00:13:17.586 Boot Partition: Not Supported 00:13:17.586 Memory Page Size Minimum: 4096 bytes 00:13:17.586 Memory Page Size Maximum: 4096 bytes 00:13:17.586 Persistent Memory Region: Not Supported 00:13:17.586 Optional Asynchronous Events Supported 00:13:17.587 Namespace Attribute Notices: Supported 00:13:17.587 Firmware Activation Notices: Not Supported 00:13:17.587 ANA Change Notices: Not Supported 00:13:17.587 PLE Aggregate Log Change Notices: Not Supported 00:13:17.587 LBA Status Info Alert Notices: Not Supported 00:13:17.587 EGE Aggregate Log Change Notices: Not Supported 00:13:17.587 Normal NVM Subsystem Shutdown event: Not Supported 00:13:17.587 Zone Descriptor Change Notices: Not Supported 00:13:17.587 Discovery Log Change Notices: Not Supported 00:13:17.587 Controller Attributes 00:13:17.587 128-bit Host Identifier: Supported 00:13:17.587 Non-Operational Permissive Mode: Not Supported 00:13:17.587 NVM Sets: Not Supported 00:13:17.587 Read Recovery Levels: Not Supported 00:13:17.587 Endurance Groups: Not Supported 00:13:17.587 Predictable Latency Mode: Not Supported 00:13:17.587 Traffic Based Keep ALive: Not Supported 00:13:17.587 Namespace Granularity: Not Supported 00:13:17.587 SQ Associations: Not Supported 00:13:17.587 UUID List: Not Supported 00:13:17.587 Multi-Domain Subsystem: Not Supported 00:13:17.587 Fixed Capacity Management: Not Supported 00:13:17.587 Variable Capacity Management: Not Supported 00:13:17.587 Delete Endurance Group: Not Supported 00:13:17.587 Delete NVM Set: Not Supported 00:13:17.587 Extended LBA Formats Supported: Not Supported 00:13:17.587 Flexible Data Placement Supported: Not Supported 00:13:17.587 00:13:17.587 Controller Memory Buffer Support 00:13:17.587 ================================ 00:13:17.587 Supported: No 00:13:17.587 00:13:17.587 Persistent Memory Region Support 00:13:17.587 ================================ 00:13:17.587 Supported: No 00:13:17.587 00:13:17.587 Admin Command Set Attributes 00:13:17.587 ============================ 00:13:17.587 Security Send/Receive: Not Supported 00:13:17.587 Format NVM: Not Supported 00:13:17.587 Firmware Activate/Download: Not Supported 00:13:17.587 Namespace Management: Not Supported 00:13:17.587 Device Self-Test: Not Supported 00:13:17.587 Directives: Not Supported 00:13:17.587 NVMe-MI: Not Supported 00:13:17.587 Virtualization Management: Not Supported 00:13:17.587 Doorbell Buffer Config: Not Supported 00:13:17.587 Get LBA Status Capability: Not Supported 00:13:17.587 Command & Feature Lockdown Capability: Not Supported 00:13:17.587 Abort Command Limit: 4 00:13:17.587 Async Event Request Limit: 4 00:13:17.587 Number of Firmware Slots: N/A 00:13:17.587 Firmware Slot 1 Read-Only: N/A 00:13:17.587 Firmware Activation Without Reset: N/A 00:13:17.587 Multiple Update Detection Support: N/A 00:13:17.587 Firmware Update Granularity: No Information Provided 00:13:17.587 Per-Namespace SMART Log: No 00:13:17.587 Asymmetric Namespace Access Log Page: Not Supported 00:13:17.587 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:17.587 Command Effects Log Page: Supported 00:13:17.587 Get Log Page Extended Data: Supported 00:13:17.587 Telemetry Log Pages: Not Supported 00:13:17.587 Persistent Event Log Pages: Not Supported 00:13:17.587 Supported Log Pages Log Page: May Support 00:13:17.587 Commands Supported & 
Effects Log Page: Not Supported 00:13:17.587 Feature Identifiers & Effects Log Page:May Support 00:13:17.587 NVMe-MI Commands & Effects Log Page: May Support 00:13:17.587 Data Area 4 for Telemetry Log: Not Supported 00:13:17.587 Error Log Page Entries Supported: 128 00:13:17.587 Keep Alive: Supported 00:13:17.587 Keep Alive Granularity: 10000 ms 00:13:17.587 00:13:17.587 NVM Command Set Attributes 00:13:17.587 ========================== 00:13:17.587 Submission Queue Entry Size 00:13:17.587 Max: 64 00:13:17.587 Min: 64 00:13:17.587 Completion Queue Entry Size 00:13:17.587 Max: 16 00:13:17.587 Min: 16 00:13:17.587 Number of Namespaces: 32 00:13:17.587 Compare Command: Supported 00:13:17.587 Write Uncorrectable Command: Not Supported 00:13:17.587 Dataset Management Command: Supported 00:13:17.587 Write Zeroes Command: Supported 00:13:17.587 Set Features Save Field: Not Supported 00:13:17.587 Reservations: Not Supported 00:13:17.587 Timestamp: Not Supported 00:13:17.587 Copy: Supported 00:13:17.587 Volatile Write Cache: Present 00:13:17.587 Atomic Write Unit (Normal): 1 00:13:17.587 Atomic Write Unit (PFail): 1 00:13:17.587 Atomic Compare & Write Unit: 1 00:13:17.587 Fused Compare & Write: Supported 00:13:17.587 Scatter-Gather List 00:13:17.587 SGL Command Set: Supported (Dword aligned) 00:13:17.587 SGL Keyed: Not Supported 00:13:17.587 SGL Bit Bucket Descriptor: Not Supported 00:13:17.587 SGL Metadata Pointer: Not Supported 00:13:17.587 Oversized SGL: Not Supported 00:13:17.587 SGL Metadata Address: Not Supported 00:13:17.587 SGL Offset: Not Supported 00:13:17.587 Transport SGL Data Block: Not Supported 00:13:17.587 Replay Protected Memory Block: Not Supported 00:13:17.587 00:13:17.587 Firmware Slot Information 00:13:17.587 ========================= 00:13:17.587 Active slot: 1 00:13:17.587 Slot 1 Firmware Revision: 25.01 00:13:17.587 00:13:17.587 00:13:17.587 Commands Supported and Effects 00:13:17.587 ============================== 00:13:17.587 Admin Commands 00:13:17.587 -------------- 00:13:17.587 Get Log Page (02h): Supported 00:13:17.587 Identify (06h): Supported 00:13:17.587 Abort (08h): Supported 00:13:17.587 Set Features (09h): Supported 00:13:17.587 Get Features (0Ah): Supported 00:13:17.587 Asynchronous Event Request (0Ch): Supported 00:13:17.587 Keep Alive (18h): Supported 00:13:17.587 I/O Commands 00:13:17.587 ------------ 00:13:17.587 Flush (00h): Supported LBA-Change 00:13:17.587 Write (01h): Supported LBA-Change 00:13:17.587 Read (02h): Supported 00:13:17.587 Compare (05h): Supported 00:13:17.587 Write Zeroes (08h): Supported LBA-Change 00:13:17.587 Dataset Management (09h): Supported LBA-Change 00:13:17.587 Copy (19h): Supported LBA-Change 00:13:17.587 00:13:17.587 Error Log 00:13:17.587 ========= 00:13:17.587 00:13:17.587 Arbitration 00:13:17.587 =========== 00:13:17.587 Arbitration Burst: 1 00:13:17.587 00:13:17.587 Power Management 00:13:17.587 ================ 00:13:17.587 Number of Power States: 1 00:13:17.587 Current Power State: Power State #0 00:13:17.587 Power State #0: 00:13:17.587 Max Power: 0.00 W 00:13:17.587 Non-Operational State: Operational 00:13:17.587 Entry Latency: Not Reported 00:13:17.587 Exit Latency: Not Reported 00:13:17.587 Relative Read Throughput: 0 00:13:17.587 Relative Read Latency: 0 00:13:17.587 Relative Write Throughput: 0 00:13:17.587 Relative Write Latency: 0 00:13:17.587 Idle Power: Not Reported 00:13:17.587 Active Power: Not Reported 00:13:17.587 Non-Operational Permissive Mode: Not Supported 00:13:17.587 00:13:17.587 Health Information 
00:13:17.587 ================== 00:13:17.587 Critical Warnings: 00:13:17.587 Available Spare Space: OK 00:13:17.587 Temperature: OK 00:13:17.587 Device Reliability: OK 00:13:17.587 Read Only: No 00:13:17.587 Volatile Memory Backup: OK 00:13:17.587 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:17.587 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:17.587 Available Spare: 0% 00:13:17.587 Available Sp[2024-12-13 09:24:29.882571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:17.587 [2024-12-13 09:24:29.890456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:17.587 [2024-12-13 09:24:29.890489] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:17.587 [2024-12-13 09:24:29.890497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.587 [2024-12-13 09:24:29.890503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.587 [2024-12-13 09:24:29.890508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.587 [2024-12-13 09:24:29.890514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.587 [2024-12-13 09:24:29.890561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:17.587 [2024-12-13 09:24:29.890573] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:17.587 [2024-12-13 09:24:29.891564] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.588 [2024-12-13 09:24:29.891607] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:17.588 [2024-12-13 09:24:29.891613] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:17.588 [2024-12-13 09:24:29.892570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:17.588 [2024-12-13 09:24:29.892582] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:17.588 [2024-12-13 09:24:29.892628] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:17.588 [2024-12-13 09:24:29.893584] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.588 are Threshold: 0% 00:13:17.588 Life Percentage Used: 0% 00:13:17.588 Data Units Read: 0 00:13:17.588 Data Units Written: 0 00:13:17.588 Host Read Commands: 0 00:13:17.588 Host Write Commands: 0 00:13:17.588 Controller Busy Time: 0 minutes 00:13:17.588 Power Cycles: 0 00:13:17.588 Power On Hours: 0 hours 00:13:17.588 Unsafe Shutdowns: 0 00:13:17.588 Unrecoverable Media Errors: 0 00:13:17.588 Lifetime Error Log Entries: 0 00:13:17.588 Warning Temperature 
Time: 0 minutes 00:13:17.588 Critical Temperature Time: 0 minutes 00:13:17.588 00:13:17.588 Number of Queues 00:13:17.588 ================ 00:13:17.588 Number of I/O Submission Queues: 127 00:13:17.588 Number of I/O Completion Queues: 127 00:13:17.588 00:13:17.588 Active Namespaces 00:13:17.588 ================= 00:13:17.588 Namespace ID:1 00:13:17.588 Error Recovery Timeout: Unlimited 00:13:17.588 Command Set Identifier: NVM (00h) 00:13:17.588 Deallocate: Supported 00:13:17.588 Deallocated/Unwritten Error: Not Supported 00:13:17.588 Deallocated Read Value: Unknown 00:13:17.588 Deallocate in Write Zeroes: Not Supported 00:13:17.588 Deallocated Guard Field: 0xFFFF 00:13:17.588 Flush: Supported 00:13:17.588 Reservation: Supported 00:13:17.588 Namespace Sharing Capabilities: Multiple Controllers 00:13:17.588 Size (in LBAs): 131072 (0GiB) 00:13:17.588 Capacity (in LBAs): 131072 (0GiB) 00:13:17.588 Utilization (in LBAs): 131072 (0GiB) 00:13:17.588 NGUID: C1AE331C7733484FAD605FF95CF74A22 00:13:17.588 UUID: c1ae331c-7733-484f-ad60-5ff95cf74a22 00:13:17.588 Thin Provisioning: Not Supported 00:13:17.588 Per-NS Atomic Units: Yes 00:13:17.588 Atomic Boundary Size (Normal): 0 00:13:17.588 Atomic Boundary Size (PFail): 0 00:13:17.588 Atomic Boundary Offset: 0 00:13:17.588 Maximum Single Source Range Length: 65535 00:13:17.588 Maximum Copy Length: 65535 00:13:17.588 Maximum Source Range Count: 1 00:13:17.588 NGUID/EUI64 Never Reused: No 00:13:17.588 Namespace Write Protected: No 00:13:17.588 Number of LBA Formats: 1 00:13:17.588 Current LBA Format: LBA Format #00 00:13:17.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:17.588 00:13:17.588 09:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:17.846 [2024-12-13 09:24:30.131735] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.188 Initializing NVMe Controllers 00:13:23.188 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:23.188 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:23.188 Initialization complete. Launching workers. 
00:13:23.188 ======================================================== 00:13:23.188 Latency(us) 00:13:23.188 Device Information : IOPS MiB/s Average min max 00:13:23.188 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39929.54 155.97 3207.17 989.65 10296.21 00:13:23.188 ======================================================== 00:13:23.188 Total : 39929.54 155.97 3207.17 989.65 10296.21 00:13:23.188 00:13:23.188 [2024-12-13 09:24:35.238706] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.188 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:23.188 [2024-12-13 09:24:35.477420] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.449 Initializing NVMe Controllers 00:13:28.449 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.449 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:28.449 Initialization complete. Launching workers. 00:13:28.449 ======================================================== 00:13:28.449 Latency(us) 00:13:28.449 Device Information : IOPS MiB/s Average min max 00:13:28.449 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39902.30 155.87 3207.67 978.55 7673.22 00:13:28.449 ======================================================== 00:13:28.449 Total : 39902.30 155.87 3207.67 978.55 7673.22 00:13:28.449 00:13:28.449 [2024-12-13 09:24:40.499443] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.449 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:28.449 [2024-12-13 09:24:40.700657] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.711 [2024-12-13 09:24:45.840554] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.711 Initializing NVMe Controllers 00:13:33.711 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.711 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:33.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:33.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:33.711 Initialization complete. Launching workers. 
00:13:33.711 Starting thread on core 2 00:13:33.711 Starting thread on core 3 00:13:33.711 Starting thread on core 1 00:13:33.711 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:33.968 [2024-12-13 09:24:46.132066] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:38.150 [2024-12-13 09:24:49.791670] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:38.150 Initializing NVMe Controllers 00:13:38.150 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.150 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:38.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:38.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:38.150 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:38.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:38.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:38.150 Initialization complete. Launching workers. 00:13:38.150 Starting thread on core 1 with urgent priority queue 00:13:38.150 Starting thread on core 2 with urgent priority queue 00:13:38.150 Starting thread on core 3 with urgent priority queue 00:13:38.150 Starting thread on core 0 with urgent priority queue 00:13:38.150 SPDK bdev Controller (SPDK2 ) core 0: 1764.67 IO/s 56.67 secs/100000 ios 00:13:38.150 SPDK bdev Controller (SPDK2 ) core 1: 1602.00 IO/s 62.42 secs/100000 ios 00:13:38.150 SPDK bdev Controller (SPDK2 ) core 2: 1959.00 IO/s 51.05 secs/100000 ios 00:13:38.150 SPDK bdev Controller (SPDK2 ) core 3: 1568.00 IO/s 63.78 secs/100000 ios 00:13:38.150 ======================================================== 00:13:38.150 00:13:38.150 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:38.150 [2024-12-13 09:24:50.080915] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:38.150 Initializing NVMe Controllers 00:13:38.150 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.150 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.150 Namespace ID: 1 size: 0GB 00:13:38.150 Initialization complete. 00:13:38.150 INFO: using host memory buffer for IO 00:13:38.150 Hello world! 
00:13:38.150 [2024-12-13 09:24:50.090975] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:38.150 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:38.150 [2024-12-13 09:24:50.378218] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.523 Initializing NVMe Controllers 00:13:39.523 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.523 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.523 Initialization complete. Launching workers. 00:13:39.523 submit (in ns) avg, min, max = 4868.8, 3180.0, 3999221.0 00:13:39.523 complete (in ns) avg, min, max = 22813.4, 1756.2, 4000225.7 00:13:39.523 00:13:39.523 Submit histogram 00:13:39.523 ================ 00:13:39.523 Range in us Cumulative Count 00:13:39.523 3.170 - 3.185: 0.0062% ( 1) 00:13:39.523 3.185 - 3.200: 0.1794% ( 28) 00:13:39.523 3.200 - 3.215: 1.2555% ( 174) 00:13:39.523 3.215 - 3.230: 4.1375% ( 466) 00:13:39.523 3.230 - 3.246: 8.0277% ( 629) 00:13:39.523 3.246 - 3.261: 12.9074% ( 789) 00:13:39.523 3.261 - 3.276: 19.6734% ( 1094) 00:13:39.523 3.276 - 3.291: 26.4209% ( 1091) 00:13:39.523 3.291 - 3.307: 32.5499% ( 991) 00:13:39.523 3.307 - 3.322: 38.5182% ( 965) 00:13:39.523 3.322 - 3.337: 43.9359% ( 876) 00:13:39.523 3.337 - 3.352: 48.6177% ( 757) 00:13:39.523 3.352 - 3.368: 53.1264% ( 729) 00:13:39.523 3.368 - 3.383: 59.3234% ( 1002) 00:13:39.523 3.383 - 3.398: 64.1722% ( 784) 00:13:39.523 3.398 - 3.413: 69.1570% ( 806) 00:13:39.523 3.413 - 3.429: 75.4654% ( 1020) 00:13:39.523 3.429 - 3.444: 79.7514% ( 693) 00:13:39.523 3.444 - 3.459: 83.1591% ( 551) 00:13:39.523 3.459 - 3.474: 85.5279% ( 383) 00:13:39.523 3.474 - 3.490: 87.0740% ( 250) 00:13:39.523 3.490 - 3.505: 87.8533% ( 126) 00:13:39.523 3.505 - 3.520: 88.4594% ( 98) 00:13:39.523 3.520 - 3.535: 89.0284% ( 92) 00:13:39.523 3.535 - 3.550: 89.7087% ( 110) 00:13:39.523 3.550 - 3.566: 90.4694% ( 123) 00:13:39.523 3.566 - 3.581: 91.4342% ( 156) 00:13:39.524 3.581 - 3.596: 92.2506% ( 132) 00:13:39.524 3.596 - 3.611: 93.0793% ( 134) 00:13:39.524 3.611 - 3.627: 93.9081% ( 134) 00:13:39.524 3.627 - 3.642: 94.8111% ( 146) 00:13:39.524 3.642 - 3.657: 95.6831% ( 141) 00:13:39.524 3.657 - 3.672: 96.5551% ( 141) 00:13:39.524 3.672 - 3.688: 97.2231% ( 108) 00:13:39.524 3.688 - 3.703: 97.7364% ( 83) 00:13:39.524 3.703 - 3.718: 98.2807% ( 88) 00:13:39.524 3.718 - 3.733: 98.6641% ( 62) 00:13:39.524 3.733 - 3.749: 98.9362% ( 44) 00:13:39.524 3.749 - 3.764: 99.1589% ( 36) 00:13:39.524 3.764 - 3.779: 99.3506% ( 31) 00:13:39.524 3.779 - 3.794: 99.4681% ( 19) 00:13:39.524 3.794 - 3.810: 99.5671% ( 16) 00:13:39.524 3.810 - 3.825: 99.6351% ( 11) 00:13:39.524 3.825 - 3.840: 99.6598% ( 4) 00:13:39.524 3.840 - 3.855: 99.6722% ( 2) 00:13:39.524 3.855 - 3.870: 99.6784% ( 1) 00:13:39.524 3.870 - 3.886: 99.6908% ( 2) 00:13:39.524 3.931 - 3.962: 99.6970% ( 1) 00:13:39.524 4.023 - 4.053: 99.7031% ( 1) 00:13:39.524 4.114 - 4.145: 99.7093% ( 1) 00:13:39.524 5.516 - 5.547: 99.7155% ( 1) 00:13:39.524 5.638 - 5.669: 99.7217% ( 1) 00:13:39.524 6.004 - 6.034: 99.7279% ( 1) 00:13:39.524 6.034 - 6.065: 99.7402% ( 2) 00:13:39.524 6.095 - 6.126: 99.7464% ( 1) 00:13:39.524 6.156 - 6.187: 99.7526% ( 1) 00:13:39.524 6.217 - 6.248: 99.7588% ( 1) 
00:13:39.524 6.552 - 6.583: 99.7650% ( 1) 00:13:39.524 6.583 - 6.613: 99.7712% ( 1) 00:13:39.524 6.674 - 6.705: 99.7774% ( 1) 00:13:39.524 6.705 - 6.735: 99.7835% ( 1) 00:13:39.524 6.735 - 6.766: 99.7897% ( 1) 00:13:39.524 6.766 - 6.796: 99.7959% ( 1) 00:13:39.524 6.796 - 6.827: 99.8083% ( 2) 00:13:39.524 6.857 - 6.888: 99.8145% ( 1) 00:13:39.524 6.888 - 6.918: 99.8206% ( 1) 00:13:39.524 7.010 - 7.040: 99.8268% ( 1) 00:13:39.524 7.040 - 7.070: 99.8330% ( 1) 00:13:39.524 7.131 - 7.162: 99.8392% ( 1) 00:13:39.524 7.253 - 7.284: 99.8454% ( 1) 00:13:39.524 7.345 - 7.375: 99.8516% ( 1) 00:13:39.524 7.375 - 7.406: 99.8578% ( 1) 00:13:39.524 7.406 - 7.436: 99.8639% ( 1) 00:13:39.524 7.436 - 7.467: 99.8763% ( 2) 00:13:39.524 7.589 - 7.619: 99.8825% ( 1) 00:13:39.524 7.680 - 7.710: 99.8949% ( 2) 00:13:39.524 7.771 - 7.802: 99.9010% ( 1) 00:13:39.524 8.046 - 8.107: 99.9072% ( 1) 00:13:39.524 8.107 - 8.168: 99.9134% ( 1) 00:13:39.524 [2024-12-13 09:24:51.469422] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.524 8.777 - 8.838: 99.9196% ( 1) 00:13:39.524 8.899 - 8.960: 99.9258% ( 1) 00:13:39.524 9.082 - 9.143: 99.9382% ( 2) 00:13:39.524 9.204 - 9.265: 99.9443% ( 1) 00:13:39.524 9.813 - 9.874: 99.9505% ( 1) 00:13:39.524 11.337 - 11.398: 99.9567% ( 1) 00:13:39.524 13.105 - 13.166: 99.9629% ( 1) 00:13:39.524 3994.575 - 4025.783: 100.0000% ( 6) 00:13:39.524 00:13:39.524 Complete histogram 00:13:39.524 ================== 00:13:39.524 Range in us Cumulative Count 00:13:39.524 1.752 - 1.760: 0.0309% ( 5) 00:13:39.524 1.760 - 1.768: 0.9030% ( 141) 00:13:39.524 1.768 - 1.775: 9.8955% ( 1454) 00:13:39.524 1.775 - 1.783: 31.4614% ( 3487) 00:13:39.524 1.783 - 1.790: 45.8470% ( 2326) 00:13:39.524 1.790 - 1.798: 49.9536% ( 664) 00:13:39.524 1.798 - 1.806: 52.3966% ( 395) 00:13:39.524 1.806 - 1.813: 55.6683% ( 529) 00:13:39.524 1.813 - 1.821: 64.5123% ( 1430) 00:13:39.524 1.821 - 1.829: 80.0730% ( 2516) 00:13:39.524 1.829 - 1.836: 89.9746% ( 1601) 00:13:39.524 1.836 - 1.844: 93.4257% ( 558) 00:13:39.524 1.844 - 1.851: 95.3058% ( 304) 00:13:39.524 1.851 - 1.859: 96.8644% ( 252) 00:13:39.524 1.859 - 1.867: 97.7240% ( 139) 00:13:39.524 1.867 - 1.874: 98.0766% ( 57) 00:13:39.524 1.874 - 1.882: 98.3116% ( 38) 00:13:39.524 1.882 - 1.890: 98.4662% ( 25) 00:13:39.524 1.890 - 1.897: 98.6579% ( 31) 00:13:39.524 1.897 - 1.905: 98.8991% ( 39) 00:13:39.524 1.905 - 1.912: 99.0599% ( 26) 00:13:39.524 1.912 - 1.920: 99.1589% ( 16) 00:13:39.524 1.920 - 1.928: 99.1774% ( 3) 00:13:39.524 1.928 - 1.935: 99.1836% ( 1) 00:13:39.524 1.935 - 1.943: 99.1898% ( 1) 00:13:39.524 1.943 - 1.950: 99.2022% ( 2) 00:13:39.524 1.950 - 1.966: 99.2331% ( 5) 00:13:39.524 1.981 - 1.996: 99.2393% ( 1) 00:13:39.524 2.011 - 2.027: 99.2455% ( 1) 00:13:39.524 2.027 - 2.042: 99.2578% ( 2) 00:13:39.524 2.042 - 2.057: 99.2702% ( 2) 00:13:39.524 2.057 - 2.072: 99.2764% ( 1) 00:13:39.524 2.072 - 2.088: 99.2826% ( 1) 00:13:39.524 2.088 - 2.103: 99.2888% ( 1) 00:13:39.524 2.103 - 2.118: 99.2949% ( 1) 00:13:39.524 2.118 - 2.133: 99.3011% ( 1) 00:13:39.524 2.179 - 2.194: 99.3073% ( 1) 00:13:39.524 2.255 - 2.270: 99.3135% ( 1) 00:13:39.524 4.053 - 4.084: 99.3197% ( 1) 00:13:39.524 4.084 - 4.114: 99.3259% ( 1) 00:13:39.524 4.358 - 4.389: 99.3321% ( 1) 00:13:39.524 4.541 - 4.571: 99.3382% ( 1) 00:13:39.524 4.571 - 4.602: 99.3444% ( 1) 00:13:39.524 4.632 - 4.663: 99.3506% ( 1) 00:13:39.524 4.998 - 5.029: 99.3568% ( 1) 00:13:39.524 5.029 - 5.059: 99.3630% ( 1) 00:13:39.524 5.181 - 5.211: 99.3692% ( 1) 
00:13:39.524 5.211 - 5.242: 99.3753% ( 1) 00:13:39.524 5.394 - 5.425: 99.3815% ( 1) 00:13:39.524 5.425 - 5.455: 99.3877% ( 1) 00:13:39.524 5.547 - 5.577: 99.4001% ( 2) 00:13:39.524 5.760 - 5.790: 99.4063% ( 1) 00:13:39.524 5.790 - 5.821: 99.4186% ( 2) 00:13:39.524 5.821 - 5.851: 99.4248% ( 1) 00:13:39.524 6.156 - 6.187: 99.4310% ( 1) 00:13:39.524 6.430 - 6.461: 99.4372% ( 1) 00:13:39.524 6.735 - 6.766: 99.4434% ( 1) 00:13:39.524 6.857 - 6.888: 99.4496% ( 1) 00:13:39.524 6.979 - 7.010: 99.4557% ( 1) 00:13:39.524 7.467 - 7.497: 99.4619% ( 1) 00:13:39.524 8.046 - 8.107: 99.4681% ( 1) 00:13:39.524 28.160 - 28.282: 99.4743% ( 1) 00:13:39.524 3994.575 - 4025.783: 100.0000% ( 85) 00:13:39.524 00:13:39.524 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:39.524 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:39.524 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:39.524 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:39.524 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:39.524 [ 00:13:39.524 { 00:13:39.524 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.524 "subtype": "Discovery", 00:13:39.524 "listen_addresses": [], 00:13:39.524 "allow_any_host": true, 00:13:39.524 "hosts": [] 00:13:39.524 }, 00:13:39.524 { 00:13:39.524 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.524 "subtype": "NVMe", 00:13:39.524 "listen_addresses": [ 00:13:39.524 { 00:13:39.524 "trtype": "VFIOUSER", 00:13:39.524 "adrfam": "IPv4", 00:13:39.524 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.524 "trsvcid": "0" 00:13:39.524 } 00:13:39.524 ], 00:13:39.524 "allow_any_host": true, 00:13:39.524 "hosts": [], 00:13:39.524 "serial_number": "SPDK1", 00:13:39.524 "model_number": "SPDK bdev Controller", 00:13:39.524 "max_namespaces": 32, 00:13:39.524 "min_cntlid": 1, 00:13:39.524 "max_cntlid": 65519, 00:13:39.524 "namespaces": [ 00:13:39.524 { 00:13:39.524 "nsid": 1, 00:13:39.524 "bdev_name": "Malloc1", 00:13:39.524 "name": "Malloc1", 00:13:39.524 "nguid": "D22D47C652EC4229A659D25BDD303E10", 00:13:39.524 "uuid": "d22d47c6-52ec-4229-a659-d25bdd303e10" 00:13:39.524 }, 00:13:39.524 { 00:13:39.524 "nsid": 2, 00:13:39.524 "bdev_name": "Malloc3", 00:13:39.524 "name": "Malloc3", 00:13:39.524 "nguid": "6D814748D5DA44E4AB04CF7707884BAA", 00:13:39.524 "uuid": "6d814748-d5da-44e4-ab04-cf7707884baa" 00:13:39.524 } 00:13:39.524 ] 00:13:39.524 }, 00:13:39.524 { 00:13:39.524 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.524 "subtype": "NVMe", 00:13:39.524 "listen_addresses": [ 00:13:39.524 { 00:13:39.524 "trtype": "VFIOUSER", 00:13:39.524 "adrfam": "IPv4", 00:13:39.524 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.524 "trsvcid": "0" 00:13:39.524 } 00:13:39.524 ], 00:13:39.524 "allow_any_host": true, 00:13:39.524 "hosts": [], 00:13:39.524 "serial_number": "SPDK2", 00:13:39.524 "model_number": "SPDK bdev Controller", 00:13:39.524 "max_namespaces": 32, 00:13:39.524 "min_cntlid": 1, 00:13:39.524 "max_cntlid": 65519, 00:13:39.524 "namespaces": [ 00:13:39.524 { 00:13:39.524 "nsid": 1, 00:13:39.524 "bdev_name": "Malloc2", 
00:13:39.524 "name": "Malloc2", 00:13:39.524 "nguid": "C1AE331C7733484FAD605FF95CF74A22", 00:13:39.524 "uuid": "c1ae331c-7733-484f-ad60-5ff95cf74a22" 00:13:39.524 } 00:13:39.524 ] 00:13:39.525 } 00:13:39.525 ] 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3294019 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:39.525 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:39.525 [2024-12-13 09:24:51.854131] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.783 Malloc4 00:13:39.783 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:39.783 [2024-12-13 09:24:52.127180] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.783 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:40.040 Asynchronous Event Request test 00:13:40.040 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:40.040 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:40.040 Registering asynchronous event callbacks... 00:13:40.040 Starting namespace attribute notice tests for all controllers... 00:13:40.040 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:40.040 aer_cb - Changed Namespace 00:13:40.040 Cleaning up... 
00:13:40.040 [ 00:13:40.040 { 00:13:40.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:40.040 "subtype": "Discovery", 00:13:40.040 "listen_addresses": [], 00:13:40.040 "allow_any_host": true, 00:13:40.040 "hosts": [] 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:40.040 "subtype": "NVMe", 00:13:40.040 "listen_addresses": [ 00:13:40.040 { 00:13:40.040 "trtype": "VFIOUSER", 00:13:40.040 "adrfam": "IPv4", 00:13:40.040 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:40.040 "trsvcid": "0" 00:13:40.040 } 00:13:40.040 ], 00:13:40.040 "allow_any_host": true, 00:13:40.040 "hosts": [], 00:13:40.040 "serial_number": "SPDK1", 00:13:40.040 "model_number": "SPDK bdev Controller", 00:13:40.040 "max_namespaces": 32, 00:13:40.040 "min_cntlid": 1, 00:13:40.040 "max_cntlid": 65519, 00:13:40.040 "namespaces": [ 00:13:40.040 { 00:13:40.040 "nsid": 1, 00:13:40.040 "bdev_name": "Malloc1", 00:13:40.040 "name": "Malloc1", 00:13:40.040 "nguid": "D22D47C652EC4229A659D25BDD303E10", 00:13:40.040 "uuid": "d22d47c6-52ec-4229-a659-d25bdd303e10" 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "nsid": 2, 00:13:40.040 "bdev_name": "Malloc3", 00:13:40.040 "name": "Malloc3", 00:13:40.040 "nguid": "6D814748D5DA44E4AB04CF7707884BAA", 00:13:40.040 "uuid": "6d814748-d5da-44e4-ab04-cf7707884baa" 00:13:40.040 } 00:13:40.040 ] 00:13:40.040 }, 00:13:40.040 { 00:13:40.040 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:40.040 "subtype": "NVMe", 00:13:40.040 "listen_addresses": [ 00:13:40.040 { 00:13:40.040 "trtype": "VFIOUSER", 00:13:40.040 "adrfam": "IPv4", 00:13:40.040 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:40.040 "trsvcid": "0" 00:13:40.040 } 00:13:40.040 ], 00:13:40.040 "allow_any_host": true, 00:13:40.040 "hosts": [], 00:13:40.040 "serial_number": "SPDK2", 00:13:40.040 "model_number": "SPDK bdev Controller", 00:13:40.040 "max_namespaces": 32, 00:13:40.041 "min_cntlid": 1, 00:13:40.041 "max_cntlid": 65519, 00:13:40.041 "namespaces": [ 00:13:40.041 { 00:13:40.041 "nsid": 1, 00:13:40.041 "bdev_name": "Malloc2", 00:13:40.041 "name": "Malloc2", 00:13:40.041 "nguid": "C1AE331C7733484FAD605FF95CF74A22", 00:13:40.041 "uuid": "c1ae331c-7733-484f-ad60-5ff95cf74a22" 00:13:40.041 }, 00:13:40.041 { 00:13:40.041 "nsid": 2, 00:13:40.041 "bdev_name": "Malloc4", 00:13:40.041 "name": "Malloc4", 00:13:40.041 "nguid": "3DFAB334356D490E802794B7506AB9AF", 00:13:40.041 "uuid": "3dfab334-356d-490e-8027-94b7506ab9af" 00:13:40.041 } 00:13:40.041 ] 00:13:40.041 } 00:13:40.041 ] 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3294019 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3285995 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3285995 ']' 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3285995 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3285995 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3285995' 00:13:40.041 killing process with pid 3285995 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3285995 00:13:40.041 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3285995 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3294244 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3294244' 00:13:40.299 Process pid: 3294244 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3294244 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3294244 ']' 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.299 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:40.557 [2024-12-13 09:24:52.683251] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:40.557 [2024-12-13 09:24:52.684073] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:13:40.557 [2024-12-13 09:24:52.684109] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.557 [2024-12-13 09:24:52.750635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.557 [2024-12-13 09:24:52.791889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.557 [2024-12-13 09:24:52.791928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.557 [2024-12-13 09:24:52.791935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.557 [2024-12-13 09:24:52.791941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.557 [2024-12-13 09:24:52.791946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.557 [2024-12-13 09:24:52.793219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.557 [2024-12-13 09:24:52.793239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.557 [2024-12-13 09:24:52.793329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.557 [2024-12-13 09:24:52.793330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.557 [2024-12-13 09:24:52.860954] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:40.557 [2024-12-13 09:24:52.861061] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:40.557 [2024-12-13 09:24:52.861268] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:40.558 [2024-12-13 09:24:52.861536] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:40.558 [2024-12-13 09:24:52.861703] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
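The trace that follows re-creates the vfio-user target in interrupt mode. Condensed from the commands visible in this run, the per-device RPC sequence is roughly the sketch below (first device only; the second device repeats the same steps with Malloc2, cnode2 and vfio-user2, and rpc.py stands for the full scripts/rpc.py path used in the trace):

  rpc.py nvmf_create_transport -t VFIOUSER -M -I    # VFIOUSER transport with the extra '-M -I' args passed to setup_nvmf_vfio_user
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1   # socket/BAR directory the listener will use
  rpc.py bdev_malloc_create 64 512 -b Malloc1       # backing malloc bdev (64 MB, 512-byte blocks)
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0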
00:13:40.558 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.558 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:40.558 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:41.933 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:41.933 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:41.933 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:41.933 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.933 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:41.933 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.933 Malloc1 00:13:42.191 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:42.191 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:42.449 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:42.706 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.706 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:42.706 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.706 Malloc2 00:13:42.963 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:42.963 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:43.220 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3294244 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3294244 ']' 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3294244 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294244 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294244' 00:13:43.479 killing process with pid 3294244 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3294244 00:13:43.479 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3294244 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.738 00:13:43.738 real 0m51.286s 00:13:43.738 user 3m18.841s 00:13:43.738 sys 0m3.177s 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:43.738 ************************************ 00:13:43.738 END TEST nvmf_vfio_user 00:13:43.738 ************************************ 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.738 ************************************ 00:13:43.738 START TEST nvmf_vfio_user_nvme_compliance 00:13:43.738 ************************************ 00:13:43.738 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.738 * Looking for test storage... 
00:13:43.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:43.738 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:43.738 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:13:43.738 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:43.997 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:43.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.998 --rc genhtml_branch_coverage=1 00:13:43.998 --rc genhtml_function_coverage=1 00:13:43.998 --rc genhtml_legend=1 00:13:43.998 --rc geninfo_all_blocks=1 00:13:43.998 --rc geninfo_unexecuted_blocks=1 00:13:43.998 00:13:43.998 ' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:43.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.998 --rc genhtml_branch_coverage=1 00:13:43.998 --rc genhtml_function_coverage=1 00:13:43.998 --rc genhtml_legend=1 00:13:43.998 --rc geninfo_all_blocks=1 00:13:43.998 --rc geninfo_unexecuted_blocks=1 00:13:43.998 00:13:43.998 ' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:43.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.998 --rc genhtml_branch_coverage=1 00:13:43.998 --rc genhtml_function_coverage=1 00:13:43.998 --rc genhtml_legend=1 00:13:43.998 --rc geninfo_all_blocks=1 00:13:43.998 --rc geninfo_unexecuted_blocks=1 00:13:43.998 00:13:43.998 ' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:43.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.998 --rc genhtml_branch_coverage=1 00:13:43.998 --rc genhtml_function_coverage=1 00:13:43.998 --rc genhtml_legend=1 00:13:43.998 --rc geninfo_all_blocks=1 00:13:43.998 --rc 
geninfo_unexecuted_blocks=1 00:13:43.998 00:13:43.998 ' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3294986 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3294986' 00:13:43.998 Process pid: 3294986 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3294986 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3294986 ']' 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.998 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.999 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.999 [2024-12-13 09:24:56.192034] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:13:43.999 [2024-12-13 09:24:56.192083] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.999 [2024-12-13 09:24:56.254319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.999 [2024-12-13 09:24:56.295891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.999 [2024-12-13 09:24:56.295924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.999 [2024-12-13 09:24:56.295931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.999 [2024-12-13 09:24:56.295936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.999 [2024-12-13 09:24:56.295941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.999 [2024-12-13 09:24:56.297221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.999 [2024-12-13 09:24:56.297312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.999 [2024-12-13 09:24:56.297313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.256 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.256 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:44.256 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 malloc0 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:45.200 09:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.200 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:45.458 00:13:45.458 00:13:45.458 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.458 http://cunit.sourceforge.net/ 00:13:45.458 00:13:45.458 00:13:45.458 Suite: nvme_compliance 00:13:45.458 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-13 09:24:57.626875] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.458 [2024-12-13 09:24:57.628202] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:45.458 [2024-12-13 09:24:57.628216] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:45.458 [2024-12-13 09:24:57.628222] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:45.458 [2024-12-13 09:24:57.629900] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.458 passed 00:13:45.458 Test: admin_identify_ctrlr_verify_fused ...[2024-12-13 09:24:57.707426] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.458 [2024-12-13 09:24:57.710452] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.458 passed 00:13:45.458 Test: admin_identify_ns ...[2024-12-13 09:24:57.790648] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.716 [2024-12-13 09:24:57.850460] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:45.716 [2024-12-13 09:24:57.858458] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:45.716 [2024-12-13 09:24:57.879549] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:45.716 passed 00:13:45.716 Test: admin_get_features_mandatory_features ...[2024-12-13 09:24:57.955139] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.716 [2024-12-13 09:24:57.958169] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.716 passed 00:13:45.716 Test: admin_get_features_optional_features ...[2024-12-13 09:24:58.034690] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.716 [2024-12-13 09:24:58.037714] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.716 passed 00:13:45.973 Test: admin_set_features_number_of_queues ...[2024-12-13 09:24:58.115375] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.973 [2024-12-13 09:24:58.217543] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.973 passed 00:13:45.973 Test: admin_get_log_page_mandatory_logs ...[2024-12-13 09:24:58.292987] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.973 [2024-12-13 09:24:58.296012] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.973 passed 00:13:46.231 Test: admin_get_log_page_with_lpo ...[2024-12-13 09:24:58.374626] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.231 [2024-12-13 09:24:58.440459] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:46.231 [2024-12-13 09:24:58.455545] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.231 passed 00:13:46.231 Test: fabric_property_get ...[2024-12-13 09:24:58.527115] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.231 [2024-12-13 09:24:58.528353] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:46.231 [2024-12-13 09:24:58.530134] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.231 passed 00:13:46.488 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-13 09:24:58.607656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.488 [2024-12-13 09:24:58.608899] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:46.488 [2024-12-13 09:24:58.610681] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.488 passed 00:13:46.488 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-13 09:24:58.686275] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.488 [2024-12-13 09:24:58.773464] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.488 [2024-12-13 09:24:58.789457] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.488 [2024-12-13 09:24:58.794536] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.488 passed 00:13:46.745 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-13 09:24:58.866131] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.745 [2024-12-13 09:24:58.867365] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:46.745 [2024-12-13 09:24:58.871157] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.745 passed 00:13:46.745 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-13 09:24:58.944846] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.745 [2024-12-13 09:24:59.021460] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:46.745 [2024-12-13 09:24:59.045456] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.745 [2024-12-13 09:24:59.050538] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.745 passed 00:13:47.003 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-13 09:24:59.124061] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.003 [2024-12-13 09:24:59.125298] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:47.003 [2024-12-13 09:24:59.125322] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:47.003 [2024-12-13 09:24:59.127082] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.003 passed 00:13:47.003 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-13 09:24:59.204730] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.003 [2024-12-13 09:24:59.297464] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:47.003 [2024-12-13 09:24:59.305468] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:47.003 [2024-12-13 09:24:59.313469] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:47.003 [2024-12-13 09:24:59.321455] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:47.003 [2024-12-13 09:24:59.350538] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.260 passed 00:13:47.260 Test: admin_create_io_sq_verify_pc ...[2024-12-13 09:24:59.424250] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.260 [2024-12-13 09:24:59.440463] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:47.260 [2024-12-13 09:24:59.458418] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.260 passed 00:13:47.260 Test: admin_create_io_qp_max_qps ...[2024-12-13 09:24:59.531938] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:48.631 [2024-12-13 09:25:00.622458] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:48.889 [2024-12-13 09:25:00.999093] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:48.889 passed 00:13:48.889 Test: admin_create_io_sq_shared_cq ...[2024-12-13 09:25:01.075627] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:48.889 [2024-12-13 09:25:01.207453] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:48.889 [2024-12-13 09:25:01.244513] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:49.148 passed 00:13:49.148 00:13:49.148 Run Summary: Type Total Ran Passed Failed Inactive 00:13:49.148 suites 1 1 n/a 0 0 00:13:49.148 tests 18 18 18 0 0 00:13:49.148 asserts 
360 360 360 0 n/a 00:13:49.148 00:13:49.148 Elapsed time = 1.487 seconds 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3294986 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3294986 ']' 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3294986 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3294986 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3294986' 00:13:49.148 killing process with pid 3294986 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3294986 00:13:49.148 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3294986 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:49.406 00:13:49.406 real 0m5.533s 00:13:49.406 user 0m15.649s 00:13:49.406 sys 0m0.466s 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:49.406 ************************************ 00:13:49.406 END TEST nvmf_vfio_user_nvme_compliance 00:13:49.406 ************************************ 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.406 ************************************ 00:13:49.406 START TEST nvmf_vfio_user_fuzz 00:13:49.406 ************************************ 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.406 * Looking for test storage... 
00:13:49.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.406 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.407 --rc genhtml_branch_coverage=1 00:13:49.407 --rc genhtml_function_coverage=1 00:13:49.407 --rc genhtml_legend=1 00:13:49.407 --rc geninfo_all_blocks=1 00:13:49.407 --rc geninfo_unexecuted_blocks=1 00:13:49.407 00:13:49.407 ' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.407 --rc genhtml_branch_coverage=1 00:13:49.407 --rc genhtml_function_coverage=1 00:13:49.407 --rc genhtml_legend=1 00:13:49.407 --rc geninfo_all_blocks=1 00:13:49.407 --rc geninfo_unexecuted_blocks=1 00:13:49.407 00:13:49.407 ' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.407 --rc genhtml_branch_coverage=1 00:13:49.407 --rc genhtml_function_coverage=1 00:13:49.407 --rc genhtml_legend=1 00:13:49.407 --rc geninfo_all_blocks=1 00:13:49.407 --rc geninfo_unexecuted_blocks=1 00:13:49.407 00:13:49.407 ' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.407 --rc genhtml_branch_coverage=1 00:13:49.407 --rc genhtml_function_coverage=1 00:13:49.407 --rc genhtml_legend=1 00:13:49.407 --rc geninfo_all_blocks=1 00:13:49.407 --rc geninfo_unexecuted_blocks=1 00:13:49.407 00:13:49.407 ' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:49.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.407 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3295946 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3295946' 00:13:49.665 Process pid: 3295946 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3295946 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3295946 ']' 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.665 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
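At this point the target binary has been launched and the harness is waiting for its RPC socket to come up before issuing any commands. A minimal sketch of what such a wait loop amounts to, assuming the default /var/tmp/spdk.sock and the stock rpc.py client (the 100-iteration bound and 0.1 s interval are illustrative choices, not the harness's actual values):

    # poll the SPDK RPC socket until the freshly started target answers
    for _ in $(seq 1 100); do
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
               -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

The "line 33: [: : integer expression expected" message earlier in this segment is a benign side effect of running an integer test on an empty expansion ('[' '' -eq 1 ']'); the script simply falls through to the non-enabled branch.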
00:13:49.666 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.666 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.666 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.666 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:49.666 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.039 malloc0 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
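The target-side setup traced above reduces to a short RPC sequence: create a VFIOUSER transport, back it with a 64 MB / 512 B-block malloc bdev, create the nqn.2021-09.io.spdk:cnode0 subsystem, attach the bdev as a namespace, and listen on /var/run/vfio-user. Condensed as a sketch (scripts/rpc.py stands in for the harness's rpc_cmd wrapper; every flag is copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0            # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
         -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string built at the end of this segment is just that transport, subsystem NQN and socket directory combined, and it is what the fuzzer invocation below connects to.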
00:13:51.039 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:23.087 Fuzzing completed. Shutting down the fuzz application 00:14:23.087 00:14:23.087 Dumping successful admin opcodes: 00:14:23.087 9, 10, 00:14:23.087 Dumping successful io opcodes: 00:14:23.087 0, 00:14:23.087 NS: 0x20000081ef00 I/O qp, Total commands completed: 1040012, total successful commands: 4104, random_seed: 1381649728 00:14:23.087 NS: 0x20000081ef00 admin qp, Total commands completed: 252176, total successful commands: 59, random_seed: 3280125120 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3295946 ']' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295946' 00:14:23.087 killing process with pid 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3295946 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:23.087 00:14:23.087 real 0m32.163s 00:14:23.087 user 0m30.041s 00:14:23.087 sys 0m30.949s 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.087 ************************************ 
00:14:23.087 END TEST nvmf_vfio_user_fuzz 00:14:23.087 ************************************ 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.087 ************************************ 00:14:23.087 START TEST nvmf_auth_target 00:14:23.087 ************************************ 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:23.087 * Looking for test storage... 00:14:23.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:23.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.087 --rc genhtml_branch_coverage=1 00:14:23.087 --rc genhtml_function_coverage=1 00:14:23.087 --rc genhtml_legend=1 00:14:23.087 --rc geninfo_all_blocks=1 00:14:23.087 --rc geninfo_unexecuted_blocks=1 00:14:23.087 00:14:23.087 ' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:23.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.087 --rc genhtml_branch_coverage=1 00:14:23.087 --rc genhtml_function_coverage=1 00:14:23.087 --rc genhtml_legend=1 00:14:23.087 --rc geninfo_all_blocks=1 00:14:23.087 --rc geninfo_unexecuted_blocks=1 00:14:23.087 00:14:23.087 ' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:23.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.087 --rc genhtml_branch_coverage=1 00:14:23.087 --rc genhtml_function_coverage=1 00:14:23.087 --rc genhtml_legend=1 00:14:23.087 --rc geninfo_all_blocks=1 00:14:23.087 --rc geninfo_unexecuted_blocks=1 00:14:23.087 00:14:23.087 ' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:23.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.087 --rc genhtml_branch_coverage=1 00:14:23.087 --rc genhtml_function_coverage=1 00:14:23.087 --rc genhtml_legend=1 00:14:23.087 --rc geninfo_all_blocks=1 00:14:23.087 --rc geninfo_unexecuted_blocks=1 00:14:23.087 00:14:23.087 ' 00:14:23.087 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.087 09:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.088 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:27.266 
09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:27.266 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.266 09:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.266 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:27.267 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:27.267 Found net devices under 0000:af:00.0: cvl_0_0 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:27.267 Found net devices under 0000:af:00.1: cvl_0_1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.267 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.267 09:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:14:27.267 00:14:27.267 --- 10.0.0.2 ping statistics --- 00:14:27.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.267 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:27.267 00:14:27.267 --- 10.0.0.1 ping statistics --- 00:14:27.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.267 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3304051 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3304051 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3304051 ']' 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
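Functionally, the network bring-up in this segment pins one port of the e810 pair (cvl_0_0) inside a fresh namespace as the target side at 10.0.0.2, leaves the other port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP/4420 on the initiator interface, and verifies reachability in both directions. A condensed sketch of the same steps, with the interface names and addresses taken from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Starting nvmf_tgt with "ip netns exec cvl_0_0_ns_spdk", as the trace does next, is what makes the listener bind inside the target-side namespace.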
00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3304074 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=252695c24a5d2b704b98ba4aa4198667d287006ce29447b3 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xQg 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 252695c24a5d2b704b98ba4aa4198667d287006ce29447b3 0 00:14:27.267 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 252695c24a5d2b704b98ba4aa4198667d287006ce29447b3 0 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=252695c24a5d2b704b98ba4aa4198667d287006ce29447b3 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xQg 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xQg 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.xQg 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=31a30c4ad0990ca6b9c38040748bad7c0f35a026b1eb3ccff0f133ef72f1fe84 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jed 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 31a30c4ad0990ca6b9c38040748bad7c0f35a026b1eb3ccff0f133ef72f1fe84 3 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 31a30c4ad0990ca6b9c38040748bad7c0f35a026b1eb3ccff0f133ef72f1fe84 3 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=31a30c4ad0990ca6b9c38040748bad7c0f35a026b1eb3ccff0f133ef72f1fe84 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jed 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jed 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Jed 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c8e9f2959ab5f7e66d774cb32db7505d 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eeW 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c8e9f2959ab5f7e66d774cb32db7505d 1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c8e9f2959ab5f7e66d774cb32db7505d 1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c8e9f2959ab5f7e66d774cb32db7505d 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eeW 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eeW 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.eeW 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da82ce02c3a16064faa96d00a32363b57977d7b76293a56f 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hQb 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da82ce02c3a16064faa96d00a32363b57977d7b76293a56f 2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 da82ce02c3a16064faa96d00a32363b57977d7b76293a56f 2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da82ce02c3a16064faa96d00a32363b57977d7b76293a56f 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hQb 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hQb 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hQb 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9823ae6f29fc654daeaee37ac20c4a499e512250d98c55a 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yt8 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9823ae6f29fc654daeaee37ac20c4a499e512250d98c55a 2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9823ae6f29fc654daeaee37ac20c4a499e512250d98c55a 2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9823ae6f29fc654daeaee37ac20c4a499e512250d98c55a 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:27.268 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yt8 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yt8 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Yt8 
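Each gen_dhchap_key call traced above draws len/2 random bytes with xxd from /dev/urandom, stashes them in a mktemp file, and wraps the hex string into an NVMe DH-HMAC-CHAP secret via format_dhchap_key and an inline python snippet. A minimal sketch of that wrapping follows; the digest identifiers (null=0, sha256=1, sha384=2, sha512=3) come straight from the digests map in the trace, while the exact encoding inside the python heredoc (base64 of the key characters plus a little-endian CRC-32 trailer) is an assumption based on the usual DHHC-1 secret layout, not a verbatim copy of nvmf/common.sh.

# Sketch: turn an xxd hex string into a DHHC-1 secret (assumed encoding).
gen_dhchap_secret_sketch() {
    local digest_id=$1 len=$2              # e.g. digest_id=3 (sha512), len=64
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same draw as the trace above
    python3 - "$digest_id" "$key" <<'EOF'
import base64, struct, sys, zlib
digest_id, key = sys.argv[1], sys.argv[2].encode()
# Assumption: secret body is base64(key characters + little-endian CRC-32 of them).
blob = key + struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("DHHC-1:%02d:%s:" % (int(digest_id), base64.b64encode(blob).decode()))
EOF
}

Called as gen_dhchap_secret_sketch 3 64, this would produce a string of the same shape as the DHHC-1:03:... secrets handed to nvme connect later in this log.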
00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b807db0c622f031ff5bffab089190946 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.z8P 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b807db0c622f031ff5bffab089190946 1 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b807db0c622f031ff5bffab089190946 1 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b807db0c622f031ff5bffab089190946 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.z8P 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.z8P 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.z8P 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=37a37e2bb97a66552fedf50b2e4cbadaa08702586f420f9f025b68890a7d8fe3 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.L0h 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 37a37e2bb97a66552fedf50b2e4cbadaa08702586f420f9f025b68890a7d8fe3 3 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 37a37e2bb97a66552fedf50b2e4cbadaa08702586f420f9f025b68890a7d8fe3 3 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=37a37e2bb97a66552fedf50b2e4cbadaa08702586f420f9f025b68890a7d8fe3 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.L0h 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.L0h 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.L0h 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3304051 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3304051 ']' 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.526 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3304074 /var/tmp/host.sock 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3304074 ']' 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
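At this point two SPDK processes are being waited on: the nvmf target (pid 3304051) answering RPCs on /var/tmp/spdk.sock, and the host-side bdev process (pid 3304074) on /var/tmp/host.sock. The waitforlisten helper simply polls the RPC socket until it responds. The loop below is a stand-in for it, assuming scripts/rpc.py and the spdk_get_version RPC are available; the real helper in autotest_common.sh adds timeouts and extra pid bookkeeping.

# Sketch: block until an SPDK RPC server answers on the given UNIX socket.
wait_for_rpc_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1           # give up if the process died
        if scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
            return 0                                      # socket is up and serving RPCs
        fi
        sleep 0.5
    done
    return 1
}

Running wait_for_rpc_sketch 3304051 /var/tmp/spdk.sock and then wait_for_rpc_sketch 3304074 /var/tmp/host.sock would mirror the two waitforlisten calls in the trace.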
00:14:27.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.782 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.782 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:27.782 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:27.783 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.783 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xQg 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xQg 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xQg 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Jed ]] 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jed 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jed 00:14:28.039 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jed 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.eeW 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.296 09:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.eeW 00:14:28.296 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.eeW 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hQb ]] 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hQb 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hQb 00:14:28.552 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hQb 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yt8 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Yt8 00:14:28.809 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Yt8 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.z8P ]] 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z8P 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z8P 00:14:28.809 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z8P 00:14:29.065 09:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L0h 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.L0h 00:14:29.065 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.L0h 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:29.322 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.579 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.579 00:14:29.835 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.835 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.835 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.835 { 00:14:29.835 "cntlid": 1, 00:14:29.835 "qid": 0, 00:14:29.835 "state": "enabled", 00:14:29.835 "thread": "nvmf_tgt_poll_group_000", 00:14:29.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:29.835 "listen_address": { 00:14:29.835 "trtype": "TCP", 00:14:29.835 "adrfam": "IPv4", 00:14:29.835 "traddr": "10.0.0.2", 00:14:29.835 "trsvcid": "4420" 00:14:29.835 }, 00:14:29.835 "peer_address": { 00:14:29.835 "trtype": "TCP", 00:14:29.835 "adrfam": "IPv4", 00:14:29.835 "traddr": "10.0.0.1", 00:14:29.835 "trsvcid": "44244" 00:14:29.835 }, 00:14:29.835 "auth": { 00:14:29.835 "state": "completed", 00:14:29.835 "digest": "sha256", 00:14:29.835 "dhgroup": "null" 00:14:29.835 } 00:14:29.835 } 00:14:29.835 ]' 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.835 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.091 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:30.091 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.091 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.091 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.091 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
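The connect_authenticate block traced above exercises one digest/dhgroup/key combination end to end: the host restricts its allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, the target registers the host NQN on the subsystem with nvmf_subsystem_add_host and a key pair from the keyring, the host attaches a controller with the matching keys, the qpair's auth state is checked for "completed", and the controller is detached. A condensed sketch of that round trip, assuming the same rpc.py sockets, NQNs, and key names as the trace (the real script wraps these in rpc_cmd/hostrpc and loops over all digests, dhgroups, and keys):

# Sketch of one connect_authenticate round trip, per the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"     # nvmf target RPC socket
HOST_RPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"    # host-side bdev RPC socket
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host: allow only one digest/dhgroup combination (sha256 + null in this pass).
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target: admit this host on the subsystem with key0/ckey0 from the keyring.
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host: attach a controller, authenticating with the same key pair.
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm DH-HMAC-CHAP finished on the qpair, then tear the controller down.
$TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
$HOST_RPC bdev_nvme_detach_controller nvme0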
00:14:30.348 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:30.348 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.909 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.910 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.165 00:14:31.165 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.165 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.165 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.421 { 00:14:31.421 "cntlid": 3, 00:14:31.421 "qid": 0, 00:14:31.421 "state": "enabled", 00:14:31.421 "thread": "nvmf_tgt_poll_group_000", 00:14:31.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:31.421 "listen_address": { 00:14:31.421 "trtype": "TCP", 00:14:31.421 "adrfam": "IPv4", 00:14:31.421 "traddr": "10.0.0.2", 00:14:31.421 "trsvcid": "4420" 00:14:31.421 }, 00:14:31.421 "peer_address": { 00:14:31.421 "trtype": "TCP", 00:14:31.421 "adrfam": "IPv4", 00:14:31.421 "traddr": "10.0.0.1", 00:14:31.421 "trsvcid": "44284" 00:14:31.421 }, 00:14:31.421 "auth": { 00:14:31.421 "state": "completed", 00:14:31.421 "digest": "sha256", 00:14:31.421 "dhgroup": "null" 00:14:31.421 } 00:14:31.421 } 00:14:31.421 ]' 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.421 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.678 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.678 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:31.678 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.678 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:31.678 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.240 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.496 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:32.496 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.496 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.496 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.497 09:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.497 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.753 00:14:32.753 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.753 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.753 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.010 { 00:14:33.010 "cntlid": 5, 00:14:33.010 "qid": 0, 00:14:33.010 "state": "enabled", 00:14:33.010 "thread": "nvmf_tgt_poll_group_000", 00:14:33.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:33.010 "listen_address": { 00:14:33.010 "trtype": "TCP", 00:14:33.010 "adrfam": "IPv4", 00:14:33.010 "traddr": "10.0.0.2", 00:14:33.010 "trsvcid": "4420" 00:14:33.010 }, 00:14:33.010 "peer_address": { 00:14:33.010 "trtype": "TCP", 00:14:33.010 "adrfam": "IPv4", 00:14:33.010 "traddr": "10.0.0.1", 00:14:33.010 "trsvcid": "44322" 00:14:33.010 }, 00:14:33.010 "auth": { 00:14:33.010 "state": "completed", 00:14:33.010 "digest": "sha256", 00:14:33.010 "dhgroup": "null" 00:14:33.010 } 00:14:33.010 } 00:14:33.010 ]' 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.010 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.266 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:33.266 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.828 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.085 
09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.085 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.342 00:14:34.342 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.342 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.342 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.599 { 00:14:34.599 "cntlid": 7, 00:14:34.599 "qid": 0, 00:14:34.599 "state": "enabled", 00:14:34.599 "thread": "nvmf_tgt_poll_group_000", 00:14:34.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:34.599 "listen_address": { 00:14:34.599 "trtype": "TCP", 00:14:34.599 "adrfam": "IPv4", 00:14:34.599 "traddr": "10.0.0.2", 00:14:34.599 "trsvcid": "4420" 00:14:34.599 }, 00:14:34.599 "peer_address": { 00:14:34.599 "trtype": "TCP", 00:14:34.599 "adrfam": "IPv4", 00:14:34.599 "traddr": "10.0.0.1", 00:14:34.599 "trsvcid": "51178" 00:14:34.599 }, 00:14:34.599 "auth": { 00:14:34.599 "state": "completed", 00:14:34.599 "digest": "sha256", 00:14:34.599 "dhgroup": "null" 00:14:34.599 } 00:14:34.599 } 00:14:34.599 ]' 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.599 09:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.599 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.856 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:34.856 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.418 09:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.418 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.675 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.675 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.675 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.675 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.675 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.931 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.931 { 00:14:35.931 "cntlid": 9, 00:14:35.931 "qid": 0, 00:14:35.931 "state": "enabled", 00:14:35.931 "thread": "nvmf_tgt_poll_group_000", 00:14:35.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:35.931 "listen_address": { 00:14:35.931 "trtype": "TCP", 00:14:35.931 "adrfam": "IPv4", 00:14:35.931 "traddr": "10.0.0.2", 00:14:35.931 "trsvcid": "4420" 00:14:35.931 }, 00:14:35.931 "peer_address": { 00:14:35.931 "trtype": "TCP", 00:14:35.931 "adrfam": "IPv4", 00:14:35.931 "traddr": "10.0.0.1", 00:14:35.931 "trsvcid": "51204" 00:14:35.931 }, 00:14:35.931 "auth": { 00:14:35.931 "state": "completed", 00:14:35.931 "digest": "sha256", 00:14:35.932 "dhgroup": "ffdhe2048" 00:14:35.932 } 00:14:35.932 } 00:14:35.932 ]' 00:14:35.932 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.932 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.188 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.445 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:36.445 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.008 
09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.008 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.265 00:14:37.265 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.265 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.265 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.521 { 00:14:37.521 "cntlid": 11, 00:14:37.521 "qid": 0, 00:14:37.521 "state": "enabled", 00:14:37.521 "thread": "nvmf_tgt_poll_group_000", 00:14:37.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:37.521 "listen_address": { 00:14:37.521 "trtype": "TCP", 00:14:37.521 "adrfam": "IPv4", 00:14:37.521 "traddr": "10.0.0.2", 00:14:37.521 "trsvcid": "4420" 00:14:37.521 }, 00:14:37.521 "peer_address": { 00:14:37.521 "trtype": "TCP", 00:14:37.521 "adrfam": "IPv4", 00:14:37.521 "traddr": "10.0.0.1", 00:14:37.521 "trsvcid": "51218" 00:14:37.521 }, 00:14:37.521 "auth": { 00:14:37.521 "state": "completed", 00:14:37.521 "digest": "sha256", 00:14:37.521 "dhgroup": "ffdhe2048" 00:14:37.521 } 00:14:37.521 } 00:14:37.521 ]' 00:14:37.521 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.521 09:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.522 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.522 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:37.778 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.778 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.778 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.778 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.778 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:37.778 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.341 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:38.598 09:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.598 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.855 00:14:38.855 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.855 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.855 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.111 { 00:14:39.111 "cntlid": 13, 00:14:39.111 "qid": 0, 00:14:39.111 "state": "enabled", 00:14:39.111 "thread": "nvmf_tgt_poll_group_000", 00:14:39.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:39.111 "listen_address": { 00:14:39.111 "trtype": "TCP", 00:14:39.111 "adrfam": "IPv4", 00:14:39.111 "traddr": "10.0.0.2", 00:14:39.111 "trsvcid": "4420" 00:14:39.111 }, 00:14:39.111 "peer_address": { 00:14:39.111 "trtype": "TCP", 00:14:39.111 "adrfam": "IPv4", 00:14:39.111 "traddr": "10.0.0.1", 00:14:39.111 "trsvcid": "51236" 00:14:39.111 }, 00:14:39.111 "auth": { 00:14:39.111 "state": "completed", 00:14:39.111 "digest": 
"sha256", 00:14:39.111 "dhgroup": "ffdhe2048" 00:14:39.111 } 00:14:39.111 } 00:14:39.111 ]' 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.111 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.368 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:39.368 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.931 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:40.187 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:40.187 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.188 09:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.188 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.444 00:14:40.444 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.444 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.444 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.807 { 00:14:40.807 "cntlid": 15, 00:14:40.807 "qid": 0, 00:14:40.807 "state": "enabled", 00:14:40.807 "thread": "nvmf_tgt_poll_group_000", 00:14:40.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:40.807 "listen_address": { 00:14:40.807 "trtype": "TCP", 00:14:40.807 "adrfam": "IPv4", 00:14:40.807 "traddr": "10.0.0.2", 00:14:40.807 "trsvcid": "4420" 00:14:40.807 }, 00:14:40.807 "peer_address": { 00:14:40.807 "trtype": "TCP", 00:14:40.807 "adrfam": "IPv4", 00:14:40.807 "traddr": "10.0.0.1", 00:14:40.807 
"trsvcid": "51272" 00:14:40.807 }, 00:14:40.807 "auth": { 00:14:40.807 "state": "completed", 00:14:40.807 "digest": "sha256", 00:14:40.807 "dhgroup": "ffdhe2048" 00:14:40.807 } 00:14:40.807 } 00:14:40.807 ]' 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.807 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.808 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.808 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.808 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.808 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.085 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:41.085 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.676 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:41.933 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:41.933 09:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.934 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.190 00:14:42.190 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.190 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.190 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.447 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.447 { 00:14:42.447 "cntlid": 17, 00:14:42.447 "qid": 0, 00:14:42.447 "state": "enabled", 00:14:42.447 "thread": "nvmf_tgt_poll_group_000", 00:14:42.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:42.447 "listen_address": { 00:14:42.447 "trtype": "TCP", 00:14:42.447 "adrfam": "IPv4", 
00:14:42.447 "traddr": "10.0.0.2", 00:14:42.447 "trsvcid": "4420" 00:14:42.447 }, 00:14:42.447 "peer_address": { 00:14:42.447 "trtype": "TCP", 00:14:42.447 "adrfam": "IPv4", 00:14:42.447 "traddr": "10.0.0.1", 00:14:42.447 "trsvcid": "51296" 00:14:42.447 }, 00:14:42.447 "auth": { 00:14:42.447 "state": "completed", 00:14:42.447 "digest": "sha256", 00:14:42.447 "dhgroup": "ffdhe3072" 00:14:42.447 } 00:14:42.447 } 00:14:42.447 ]' 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.448 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.704 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:42.704 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.269 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.526 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.782 00:14:43.782 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.782 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.782 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.039 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.039 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.039 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.039 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.040 { 
00:14:44.040 "cntlid": 19, 00:14:44.040 "qid": 0, 00:14:44.040 "state": "enabled", 00:14:44.040 "thread": "nvmf_tgt_poll_group_000", 00:14:44.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:44.040 "listen_address": { 00:14:44.040 "trtype": "TCP", 00:14:44.040 "adrfam": "IPv4", 00:14:44.040 "traddr": "10.0.0.2", 00:14:44.040 "trsvcid": "4420" 00:14:44.040 }, 00:14:44.040 "peer_address": { 00:14:44.040 "trtype": "TCP", 00:14:44.040 "adrfam": "IPv4", 00:14:44.040 "traddr": "10.0.0.1", 00:14:44.040 "trsvcid": "51304" 00:14:44.040 }, 00:14:44.040 "auth": { 00:14:44.040 "state": "completed", 00:14:44.040 "digest": "sha256", 00:14:44.040 "dhgroup": "ffdhe3072" 00:14:44.040 } 00:14:44.040 } 00:14:44.040 ]' 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.040 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.296 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:44.296 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:44.859 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.115 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.116 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.116 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.372 00:14:45.372 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.372 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.372 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.629 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.629 { 00:14:45.629 "cntlid": 21, 00:14:45.629 "qid": 0, 00:14:45.629 "state": "enabled", 00:14:45.629 "thread": "nvmf_tgt_poll_group_000", 00:14:45.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:45.629 "listen_address": { 00:14:45.629 "trtype": "TCP", 00:14:45.629 "adrfam": "IPv4", 00:14:45.629 "traddr": "10.0.0.2", 00:14:45.629 "trsvcid": "4420" 00:14:45.629 }, 00:14:45.629 "peer_address": { 00:14:45.629 "trtype": "TCP", 00:14:45.629 "adrfam": "IPv4", 00:14:45.629 "traddr": "10.0.0.1", 00:14:45.629 "trsvcid": "36512" 00:14:45.629 }, 00:14:45.629 "auth": { 00:14:45.629 "state": "completed", 00:14:45.629 "digest": "sha256", 00:14:45.629 "dhgroup": "ffdhe3072" 00:14:45.629 } 00:14:45.629 } 00:14:45.629 ]' 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.629 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.630 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.886 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:45.886 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.449 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.705 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.706 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.963 00:14:46.963 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.963 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.963 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.220 09:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.220 { 00:14:47.220 "cntlid": 23, 00:14:47.220 "qid": 0, 00:14:47.220 "state": "enabled", 00:14:47.220 "thread": "nvmf_tgt_poll_group_000", 00:14:47.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:47.220 "listen_address": { 00:14:47.220 "trtype": "TCP", 00:14:47.220 "adrfam": "IPv4", 00:14:47.220 "traddr": "10.0.0.2", 00:14:47.220 "trsvcid": "4420" 00:14:47.220 }, 00:14:47.220 "peer_address": { 00:14:47.220 "trtype": "TCP", 00:14:47.220 "adrfam": "IPv4", 00:14:47.220 "traddr": "10.0.0.1", 00:14:47.220 "trsvcid": "36536" 00:14:47.220 }, 00:14:47.220 "auth": { 00:14:47.220 "state": "completed", 00:14:47.220 "digest": "sha256", 00:14:47.220 "dhgroup": "ffdhe3072" 00:14:47.220 } 00:14:47.220 } 00:14:47.220 ]' 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.220 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.476 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:47.476 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:48.040 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.041 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.298 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.555 00:14:48.555 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.555 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.555 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.811 { 00:14:48.811 "cntlid": 25, 00:14:48.811 "qid": 0, 00:14:48.811 "state": "enabled", 00:14:48.811 "thread": "nvmf_tgt_poll_group_000", 00:14:48.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:48.811 "listen_address": { 00:14:48.811 "trtype": "TCP", 00:14:48.811 "adrfam": "IPv4", 00:14:48.811 "traddr": "10.0.0.2", 00:14:48.811 "trsvcid": "4420" 00:14:48.811 }, 00:14:48.811 "peer_address": { 00:14:48.811 "trtype": "TCP", 00:14:48.811 "adrfam": "IPv4", 00:14:48.811 "traddr": "10.0.0.1", 00:14:48.811 "trsvcid": "36550" 00:14:48.811 }, 00:14:48.811 "auth": { 00:14:48.811 "state": "completed", 00:14:48.811 "digest": "sha256", 00:14:48.811 "dhgroup": "ffdhe4096" 00:14:48.811 } 00:14:48.811 } 00:14:48.811 ]' 00:14:48.811 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.811 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.067 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:49.067 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.631 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.889 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.146 00:14:50.146 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.146 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.146 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.403 { 00:14:50.403 "cntlid": 27, 00:14:50.403 "qid": 0, 00:14:50.403 "state": "enabled", 00:14:50.403 "thread": "nvmf_tgt_poll_group_000", 00:14:50.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:50.403 "listen_address": { 00:14:50.403 "trtype": "TCP", 00:14:50.403 "adrfam": "IPv4", 00:14:50.403 "traddr": "10.0.0.2", 00:14:50.403 "trsvcid": "4420" 00:14:50.403 }, 00:14:50.403 "peer_address": { 00:14:50.403 "trtype": "TCP", 00:14:50.403 "adrfam": "IPv4", 00:14:50.403 "traddr": "10.0.0.1", 00:14:50.403 "trsvcid": "36586" 00:14:50.403 }, 00:14:50.403 "auth": { 00:14:50.403 "state": "completed", 00:14:50.403 "digest": "sha256", 00:14:50.403 "dhgroup": "ffdhe4096" 00:14:50.403 } 00:14:50.403 } 00:14:50.403 ]' 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.403 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.660 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:50.660 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:51.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.224 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:51.481 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.482 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.739 00:14:51.739 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
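For readability, the host/target RPC sequence exercised by each connect_authenticate pass traced above can be summarized in the following condensed sketch (shown here for one pass: sha256 digest, ffdhe4096 group, key1/ckey1). This is a sketch, not part of the captured output: rpc_cmd is assumed to be the suite's wrapper around scripts/rpc.py for the target-side socket, and key1/ckey1 are assumed to be DH-HMAC-CHAP keys registered earlier in the test; only commands and flags that appear verbatim in this trace are used.

  # one connect_authenticate pass, condensed from the trace above
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host side: restrict DH-HMAC-CHAP to one digest/dhgroup
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target side: allow the host with the chosen key pair (rpc_cmd = target-socket wrapper, assumed)
  rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller, authenticating with the same keys
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify: controller is present and the qpair reports a completed auth state
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'              # expect completed
  # tear down before the next key/dhgroup combination
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same sequence then repeats with key2, key3 and the remaining ffdhe groups, exactly as the surrounding trace shows.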
00:14:51.739 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.739 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.996 { 00:14:51.996 "cntlid": 29, 00:14:51.996 "qid": 0, 00:14:51.996 "state": "enabled", 00:14:51.996 "thread": "nvmf_tgt_poll_group_000", 00:14:51.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:51.996 "listen_address": { 00:14:51.996 "trtype": "TCP", 00:14:51.996 "adrfam": "IPv4", 00:14:51.996 "traddr": "10.0.0.2", 00:14:51.996 "trsvcid": "4420" 00:14:51.996 }, 00:14:51.996 "peer_address": { 00:14:51.996 "trtype": "TCP", 00:14:51.996 "adrfam": "IPv4", 00:14:51.996 "traddr": "10.0.0.1", 00:14:51.996 "trsvcid": "36624" 00:14:51.996 }, 00:14:51.996 "auth": { 00:14:51.996 "state": "completed", 00:14:51.996 "digest": "sha256", 00:14:51.996 "dhgroup": "ffdhe4096" 00:14:51.996 } 00:14:51.996 } 00:14:51.996 ]' 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.996 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.253 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:52.253 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: 
--dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:52.817 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.074 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.330 00:14:53.330 09:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.330 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.330 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.587 { 00:14:53.587 "cntlid": 31, 00:14:53.587 "qid": 0, 00:14:53.587 "state": "enabled", 00:14:53.587 "thread": "nvmf_tgt_poll_group_000", 00:14:53.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:53.587 "listen_address": { 00:14:53.587 "trtype": "TCP", 00:14:53.587 "adrfam": "IPv4", 00:14:53.587 "traddr": "10.0.0.2", 00:14:53.587 "trsvcid": "4420" 00:14:53.587 }, 00:14:53.587 "peer_address": { 00:14:53.587 "trtype": "TCP", 00:14:53.587 "adrfam": "IPv4", 00:14:53.587 "traddr": "10.0.0.1", 00:14:53.587 "trsvcid": "36640" 00:14:53.587 }, 00:14:53.587 "auth": { 00:14:53.587 "state": "completed", 00:14:53.587 "digest": "sha256", 00:14:53.587 "dhgroup": "ffdhe4096" 00:14:53.587 } 00:14:53.587 } 00:14:53.587 ]' 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.587 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.844 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:53.844 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.407 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.664 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.921 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.178 { 00:14:55.178 "cntlid": 33, 00:14:55.178 "qid": 0, 00:14:55.178 "state": "enabled", 00:14:55.178 "thread": "nvmf_tgt_poll_group_000", 00:14:55.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:55.178 "listen_address": { 00:14:55.178 "trtype": "TCP", 00:14:55.178 "adrfam": "IPv4", 00:14:55.178 "traddr": "10.0.0.2", 00:14:55.178 "trsvcid": "4420" 00:14:55.178 }, 00:14:55.178 "peer_address": { 00:14:55.178 "trtype": "TCP", 00:14:55.178 "adrfam": "IPv4", 00:14:55.178 "traddr": "10.0.0.1", 00:14:55.178 "trsvcid": "58726" 00:14:55.178 }, 00:14:55.178 "auth": { 00:14:55.178 "state": "completed", 00:14:55.178 "digest": "sha256", 00:14:55.178 "dhgroup": "ffdhe6144" 00:14:55.178 } 00:14:55.178 } 00:14:55.178 ]' 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.178 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.435 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.435 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.435 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.435 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.435 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.692 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret 
DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:55.692 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.256 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.513 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.513 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.513 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.513 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.771 00:14:56.771 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.771 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.771 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.028 { 00:14:57.028 "cntlid": 35, 00:14:57.028 "qid": 0, 00:14:57.028 "state": "enabled", 00:14:57.028 "thread": "nvmf_tgt_poll_group_000", 00:14:57.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:57.028 "listen_address": { 00:14:57.028 "trtype": "TCP", 00:14:57.028 "adrfam": "IPv4", 00:14:57.028 "traddr": "10.0.0.2", 00:14:57.028 "trsvcid": "4420" 00:14:57.028 }, 00:14:57.028 "peer_address": { 00:14:57.028 "trtype": "TCP", 00:14:57.028 "adrfam": "IPv4", 00:14:57.028 "traddr": "10.0.0.1", 00:14:57.028 "trsvcid": "58754" 00:14:57.028 }, 00:14:57.028 "auth": { 00:14:57.028 "state": "completed", 00:14:57.028 "digest": "sha256", 00:14:57.028 "dhgroup": "ffdhe6144" 00:14:57.028 } 00:14:57.028 } 00:14:57.028 ]' 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.028 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.285 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:57.285 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.849 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.106 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.364 00:14:58.364 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.364 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.364 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.622 { 00:14:58.622 "cntlid": 37, 00:14:58.622 "qid": 0, 00:14:58.622 "state": "enabled", 00:14:58.622 "thread": "nvmf_tgt_poll_group_000", 00:14:58.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:14:58.622 "listen_address": { 00:14:58.622 "trtype": "TCP", 00:14:58.622 "adrfam": "IPv4", 00:14:58.622 "traddr": "10.0.0.2", 00:14:58.622 "trsvcid": "4420" 00:14:58.622 }, 00:14:58.622 "peer_address": { 00:14:58.622 "trtype": "TCP", 00:14:58.622 "adrfam": "IPv4", 00:14:58.622 "traddr": "10.0.0.1", 00:14:58.622 "trsvcid": "58768" 00:14:58.622 }, 00:14:58.622 "auth": { 00:14:58.622 "state": "completed", 00:14:58.622 "digest": "sha256", 00:14:58.622 "dhgroup": "ffdhe6144" 00:14:58.622 } 00:14:58.622 } 00:14:58.622 ]' 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:58.622 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.880 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:58.880 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:14:59.445 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.445 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.446 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.703 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.704 09:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.704 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.961 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.219 { 00:15:00.219 "cntlid": 39, 00:15:00.219 "qid": 0, 00:15:00.219 "state": "enabled", 00:15:00.219 "thread": "nvmf_tgt_poll_group_000", 00:15:00.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:00.219 "listen_address": { 00:15:00.219 "trtype": "TCP", 00:15:00.219 "adrfam": "IPv4", 00:15:00.219 "traddr": "10.0.0.2", 00:15:00.219 "trsvcid": "4420" 00:15:00.219 }, 00:15:00.219 "peer_address": { 00:15:00.219 "trtype": "TCP", 00:15:00.219 "adrfam": "IPv4", 00:15:00.219 "traddr": "10.0.0.1", 00:15:00.219 "trsvcid": "58782" 00:15:00.219 }, 00:15:00.219 "auth": { 00:15:00.219 "state": "completed", 00:15:00.219 "digest": "sha256", 00:15:00.219 "dhgroup": "ffdhe6144" 00:15:00.219 } 00:15:00.219 } 00:15:00.219 ]' 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.219 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:00.477 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:01.042 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
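Alongside the RPC-driven bdev path, each pass also exercises the kernel nvme-cli path seen in the nvme_connect / nvme disconnect entries above. A hedged sketch of that step follows; DHCHAP_SECRET and DHCHAP_CTRL_SECRET are placeholders standing for the DHHC-1:xx:...: strings that appear in full in this trace (for key3 no controller secret is passed), and the address, NQNs and hostid are copied from the log.

  # host kernel connect with DH-HMAC-CHAP, as driven by nvme_connect in the trace
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # expect: 1 controller(s) disconnected
  # target cleanup before the next key (rpc_cmd = target-socket wrapper, assumed)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562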
00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.300 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.301 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.866 00:15:01.866 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.866 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.866 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.125 { 00:15:02.125 "cntlid": 41, 00:15:02.125 "qid": 0, 00:15:02.125 "state": "enabled", 00:15:02.125 "thread": "nvmf_tgt_poll_group_000", 00:15:02.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:02.125 "listen_address": { 00:15:02.125 "trtype": "TCP", 00:15:02.125 "adrfam": "IPv4", 00:15:02.125 "traddr": "10.0.0.2", 00:15:02.125 "trsvcid": "4420" 00:15:02.125 }, 00:15:02.125 "peer_address": { 00:15:02.125 "trtype": "TCP", 00:15:02.125 "adrfam": "IPv4", 00:15:02.125 "traddr": "10.0.0.1", 00:15:02.125 "trsvcid": "58812" 00:15:02.125 }, 00:15:02.125 "auth": { 00:15:02.125 "state": "completed", 00:15:02.125 "digest": "sha256", 00:15:02.125 "dhgroup": "ffdhe8192" 00:15:02.125 } 00:15:02.125 } 00:15:02.125 ]' 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.125 09:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.125 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.383 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:02.383 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.949 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.207 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.773 00:15:03.773 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.773 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.773 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.773 { 00:15:03.773 "cntlid": 43, 00:15:03.773 "qid": 0, 00:15:03.773 "state": "enabled", 00:15:03.773 "thread": "nvmf_tgt_poll_group_000", 00:15:03.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:03.773 "listen_address": { 00:15:03.773 "trtype": "TCP", 00:15:03.773 "adrfam": "IPv4", 00:15:03.773 "traddr": "10.0.0.2", 00:15:03.773 "trsvcid": "4420" 00:15:03.773 }, 00:15:03.773 "peer_address": { 00:15:03.773 "trtype": "TCP", 00:15:03.773 "adrfam": "IPv4", 00:15:03.773 "traddr": "10.0.0.1", 00:15:03.773 "trsvcid": "58846" 00:15:03.773 }, 00:15:03.773 "auth": { 00:15:03.773 "state": "completed", 00:15:03.773 "digest": "sha256", 00:15:03.773 "dhgroup": "ffdhe8192" 00:15:03.773 } 00:15:03.773 } 00:15:03.773 ]' 00:15:03.773 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.031 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.289 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:04.289 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.855 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.855 09:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.855 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.421 00:15:05.421 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.421 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.421 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.679 { 00:15:05.679 "cntlid": 45, 00:15:05.679 "qid": 0, 00:15:05.679 "state": "enabled", 00:15:05.679 "thread": "nvmf_tgt_poll_group_000", 00:15:05.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:05.679 "listen_address": { 00:15:05.679 "trtype": "TCP", 00:15:05.679 "adrfam": "IPv4", 00:15:05.679 "traddr": "10.0.0.2", 00:15:05.679 "trsvcid": "4420" 00:15:05.679 }, 00:15:05.679 "peer_address": { 00:15:05.679 "trtype": "TCP", 00:15:05.679 "adrfam": "IPv4", 00:15:05.679 "traddr": "10.0.0.1", 00:15:05.679 "trsvcid": "40140" 00:15:05.679 }, 00:15:05.679 "auth": { 00:15:05.679 "state": "completed", 00:15:05.679 "digest": "sha256", 00:15:05.679 "dhgroup": "ffdhe8192" 00:15:05.679 } 00:15:05.679 } 00:15:05.679 ]' 00:15:05.679 
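The qpair listing just dumped is what each iteration's verification keys off: after the attach succeeds, nvmf_subsystem_get_qpairs is queried on the target and the negotiated digest, DH group and auth state are compared against what was configured for that round. A minimal sketch of the check that follows below, using the same jq filters as the trace (rpc.py stands in for the script's rpc_cmd helper):

qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The auth block of the established qpair must reflect the negotiated parameters.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]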
09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.679 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.679 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.679 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.679 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.937 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:05.937 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.501 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.759 09:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.759 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.323 00:15:07.323 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.323 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.323 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.581 { 00:15:07.581 "cntlid": 47, 00:15:07.581 "qid": 0, 00:15:07.581 "state": "enabled", 00:15:07.581 "thread": "nvmf_tgt_poll_group_000", 00:15:07.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:07.581 "listen_address": { 00:15:07.581 "trtype": "TCP", 00:15:07.581 "adrfam": "IPv4", 00:15:07.581 "traddr": "10.0.0.2", 00:15:07.581 "trsvcid": "4420" 00:15:07.581 }, 00:15:07.581 "peer_address": { 00:15:07.581 "trtype": "TCP", 00:15:07.581 "adrfam": "IPv4", 00:15:07.581 "traddr": "10.0.0.1", 00:15:07.581 "trsvcid": "40180" 00:15:07.581 }, 00:15:07.581 "auth": { 00:15:07.581 "state": "completed", 00:15:07.581 
"digest": "sha256", 00:15:07.581 "dhgroup": "ffdhe8192" 00:15:07.581 } 00:15:07.581 } 00:15:07.581 ]' 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.581 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.582 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.582 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.839 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:07.839 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.405 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:08.663 09:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.663 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.921 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.921 { 00:15:08.921 "cntlid": 49, 00:15:08.921 "qid": 0, 00:15:08.921 "state": "enabled", 00:15:08.921 "thread": "nvmf_tgt_poll_group_000", 00:15:08.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:08.921 "listen_address": { 00:15:08.921 "trtype": "TCP", 00:15:08.921 "adrfam": "IPv4", 
00:15:08.921 "traddr": "10.0.0.2", 00:15:08.921 "trsvcid": "4420" 00:15:08.921 }, 00:15:08.921 "peer_address": { 00:15:08.921 "trtype": "TCP", 00:15:08.921 "adrfam": "IPv4", 00:15:08.921 "traddr": "10.0.0.1", 00:15:08.921 "trsvcid": "40204" 00:15:08.921 }, 00:15:08.921 "auth": { 00:15:08.921 "state": "completed", 00:15:08.921 "digest": "sha384", 00:15:08.921 "dhgroup": "null" 00:15:08.921 } 00:15:08.921 } 00:15:08.921 ]' 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.921 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.179 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.179 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.179 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.179 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.179 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.437 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:09.437 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.003 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.261 00:15:10.261 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.261 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.261 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.519 { 00:15:10.519 "cntlid": 51, 00:15:10.519 "qid": 0, 00:15:10.519 "state": "enabled", 
00:15:10.519 "thread": "nvmf_tgt_poll_group_000", 00:15:10.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:10.519 "listen_address": { 00:15:10.519 "trtype": "TCP", 00:15:10.519 "adrfam": "IPv4", 00:15:10.519 "traddr": "10.0.0.2", 00:15:10.519 "trsvcid": "4420" 00:15:10.519 }, 00:15:10.519 "peer_address": { 00:15:10.519 "trtype": "TCP", 00:15:10.519 "adrfam": "IPv4", 00:15:10.519 "traddr": "10.0.0.1", 00:15:10.519 "trsvcid": "40216" 00:15:10.519 }, 00:15:10.519 "auth": { 00:15:10.519 "state": "completed", 00:15:10.519 "digest": "sha384", 00:15:10.519 "dhgroup": "null" 00:15:10.519 } 00:15:10.519 } 00:15:10.519 ]' 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.519 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.777 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.777 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.777 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.777 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.777 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.777 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:10.777 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:11.343 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.343 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:11.343 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.343 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.602 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.860 00:15:11.860 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.860 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.860 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.118 09:26:24 
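Throughout the trace, every initiator-side command goes through the hostrpc wrapper logged at target/auth.sh@31, which simply points rpc.py at the host application's RPC socket instead of the target's default one. Its behaviour, as inferred from the trace (the $rootdir variable is an assumption; only the expanded path is visible in the log):

hostrpc() {
    # Forward the RPC to the separate SPDK application acting as the NVMe/TCP initiator.
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}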
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.118 { 00:15:12.118 "cntlid": 53, 00:15:12.118 "qid": 0, 00:15:12.118 "state": "enabled", 00:15:12.118 "thread": "nvmf_tgt_poll_group_000", 00:15:12.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:12.118 "listen_address": { 00:15:12.118 "trtype": "TCP", 00:15:12.118 "adrfam": "IPv4", 00:15:12.118 "traddr": "10.0.0.2", 00:15:12.118 "trsvcid": "4420" 00:15:12.118 }, 00:15:12.118 "peer_address": { 00:15:12.118 "trtype": "TCP", 00:15:12.118 "adrfam": "IPv4", 00:15:12.118 "traddr": "10.0.0.1", 00:15:12.118 "trsvcid": "40240" 00:15:12.118 }, 00:15:12.118 "auth": { 00:15:12.118 "state": "completed", 00:15:12.118 "digest": "sha384", 00:15:12.118 "dhgroup": "null" 00:15:12.118 } 00:15:12.118 } 00:15:12.118 ]' 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.118 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.376 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:12.376 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.946 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.204 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:13.204 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.205 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.462 00:15:13.462 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.462 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.462 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.720 { 00:15:13.720 "cntlid": 55, 00:15:13.720 "qid": 0, 00:15:13.720 "state": "enabled", 00:15:13.720 "thread": "nvmf_tgt_poll_group_000", 00:15:13.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:13.720 "listen_address": { 00:15:13.720 "trtype": "TCP", 00:15:13.720 "adrfam": "IPv4", 00:15:13.720 "traddr": "10.0.0.2", 00:15:13.720 "trsvcid": "4420" 00:15:13.720 }, 00:15:13.720 "peer_address": { 00:15:13.720 "trtype": "TCP", 00:15:13.720 "adrfam": "IPv4", 00:15:13.720 "traddr": "10.0.0.1", 00:15:13.720 "trsvcid": "40264" 00:15:13.720 }, 00:15:13.720 "auth": { 00:15:13.720 "state": "completed", 00:15:13.720 "digest": "sha384", 00:15:13.720 "dhgroup": "null" 00:15:13.720 } 00:15:13.720 } 00:15:13.720 ]' 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.720 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.720 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:13.721 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.721 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.721 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.721 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.978 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:13.978 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.544 09:26:26 
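One detail worth noting in the connect_authenticate trace: the controller key is optional. The ${ckeys[$3]:+...} expansion at target/auth.sh@68 produces no arguments when no controller key exists for a key id, which is why the key3 rounds above add the host and attach the controller with --dhchap-key key3 only, and the matching nvme connect carries no --dhchap-ctrl-secret. A minimal illustration of the idiom (array contents and the hostnqn variable are hypothetical):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
ckeys=( [0]=ckey0 [1]=ckey1 [2]=ckey2 [3]= )   # no controller key for key3

keyid=3
# Expands to nothing when ckeys[keyid] is empty or unset, otherwise to the two option words.
ckey=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )

rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"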
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.544 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.802 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.060 00:15:15.060 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.060 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.060 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.318 { 00:15:15.318 "cntlid": 57, 00:15:15.318 "qid": 0, 00:15:15.318 "state": "enabled", 00:15:15.318 "thread": "nvmf_tgt_poll_group_000", 00:15:15.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:15.318 "listen_address": { 00:15:15.318 "trtype": "TCP", 00:15:15.318 "adrfam": "IPv4", 00:15:15.318 "traddr": "10.0.0.2", 00:15:15.318 "trsvcid": "4420" 00:15:15.318 }, 00:15:15.318 "peer_address": { 00:15:15.318 "trtype": "TCP", 00:15:15.318 "adrfam": "IPv4", 00:15:15.318 "traddr": "10.0.0.1", 00:15:15.318 "trsvcid": "44128" 00:15:15.318 }, 00:15:15.318 "auth": { 00:15:15.318 "state": "completed", 00:15:15.318 "digest": "sha384", 00:15:15.318 "dhgroup": "ffdhe2048" 00:15:15.318 } 00:15:15.318 } 00:15:15.318 ]' 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.318 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.576 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:15.576 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:16.142 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.142 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.143 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.401 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.659 00:15:16.659 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.659 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.659 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.935 { 00:15:16.935 "cntlid": 59, 00:15:16.935 "qid": 0, 00:15:16.935 "state": "enabled", 00:15:16.935 "thread": "nvmf_tgt_poll_group_000", 00:15:16.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:16.935 "listen_address": { 00:15:16.935 "trtype": "TCP", 00:15:16.935 "adrfam": "IPv4", 00:15:16.935 "traddr": "10.0.0.2", 00:15:16.935 "trsvcid": "4420" 00:15:16.935 }, 00:15:16.935 "peer_address": { 00:15:16.935 "trtype": "TCP", 00:15:16.935 "adrfam": "IPv4", 00:15:16.935 "traddr": "10.0.0.1", 00:15:16.935 "trsvcid": "44156" 00:15:16.935 }, 00:15:16.935 "auth": { 00:15:16.935 "state": "completed", 00:15:16.935 "digest": "sha384", 00:15:16.935 "dhgroup": "ffdhe2048" 00:15:16.935 } 00:15:16.935 } 00:15:16.935 ]' 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.935 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.222 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:17.222 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.801 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.801 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.060 00:15:18.060 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.060 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:18.060 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.318 { 00:15:18.318 "cntlid": 61, 00:15:18.318 "qid": 0, 00:15:18.318 "state": "enabled", 00:15:18.318 "thread": "nvmf_tgt_poll_group_000", 00:15:18.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:18.318 "listen_address": { 00:15:18.318 "trtype": "TCP", 00:15:18.318 "adrfam": "IPv4", 00:15:18.318 "traddr": "10.0.0.2", 00:15:18.318 "trsvcid": "4420" 00:15:18.318 }, 00:15:18.318 "peer_address": { 00:15:18.318 "trtype": "TCP", 00:15:18.318 "adrfam": "IPv4", 00:15:18.318 "traddr": "10.0.0.1", 00:15:18.318 "trsvcid": "44192" 00:15:18.318 }, 00:15:18.318 "auth": { 00:15:18.318 "state": "completed", 00:15:18.318 "digest": "sha384", 00:15:18.318 "dhgroup": "ffdhe2048" 00:15:18.318 } 00:15:18.318 } 00:15:18.318 ]' 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.318 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:18.577 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.143 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.402 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.660 00:15:19.660 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.660 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.660 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.919 { 00:15:19.919 "cntlid": 63, 00:15:19.919 "qid": 0, 00:15:19.919 "state": "enabled", 00:15:19.919 "thread": "nvmf_tgt_poll_group_000", 00:15:19.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:19.919 "listen_address": { 00:15:19.919 "trtype": "TCP", 00:15:19.919 "adrfam": "IPv4", 00:15:19.919 "traddr": "10.0.0.2", 00:15:19.919 "trsvcid": "4420" 00:15:19.919 }, 00:15:19.919 "peer_address": { 00:15:19.919 "trtype": "TCP", 00:15:19.919 "adrfam": "IPv4", 00:15:19.919 "traddr": "10.0.0.1", 00:15:19.919 "trsvcid": "44216" 00:15:19.919 }, 00:15:19.919 "auth": { 00:15:19.919 "state": "completed", 00:15:19.919 "digest": "sha384", 00:15:19.919 "dhgroup": "ffdhe2048" 00:15:19.919 } 00:15:19.919 } 00:15:19.919 ]' 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.919 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.920 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.178 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:20.178 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:20.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.745 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.003 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:21.003 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.003 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.004 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.262 
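The round above has just re-attached nvme0 with key0/ckey0 over sha384/ffdhe3072; the checks that follow are the same qpair verification repeated after every attach. A minimal sketch of that verification, assuming the /var/tmp/host.sock RPC socket and nvme0 controller name shown in the trace, and assuming rpc_cmd is the suite's target-side wrapper around scripts/rpc.py (default socket):

  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # assumption: target-side RPC on the default socket

  # one verification round, as at target/auth.sh@73-78 in the trace
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
  hostrpc bdev_nvme_detach_controller nvme0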
00:15:21.262 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.262 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.262 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.521 { 00:15:21.521 "cntlid": 65, 00:15:21.521 "qid": 0, 00:15:21.521 "state": "enabled", 00:15:21.521 "thread": "nvmf_tgt_poll_group_000", 00:15:21.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:21.521 "listen_address": { 00:15:21.521 "trtype": "TCP", 00:15:21.521 "adrfam": "IPv4", 00:15:21.521 "traddr": "10.0.0.2", 00:15:21.521 "trsvcid": "4420" 00:15:21.521 }, 00:15:21.521 "peer_address": { 00:15:21.521 "trtype": "TCP", 00:15:21.521 "adrfam": "IPv4", 00:15:21.521 "traddr": "10.0.0.1", 00:15:21.521 "trsvcid": "44242" 00:15:21.521 }, 00:15:21.521 "auth": { 00:15:21.521 "state": "completed", 00:15:21.521 "digest": "sha384", 00:15:21.521 "dhgroup": "ffdhe3072" 00:15:21.521 } 00:15:21.521 } 00:15:21.521 ]' 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.521 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.780 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:21.780 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.346 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.605 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.865 00:15:22.865 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.865 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.865 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.125 { 00:15:23.125 "cntlid": 67, 00:15:23.125 "qid": 0, 00:15:23.125 "state": "enabled", 00:15:23.125 "thread": "nvmf_tgt_poll_group_000", 00:15:23.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:23.125 "listen_address": { 00:15:23.125 "trtype": "TCP", 00:15:23.125 "adrfam": "IPv4", 00:15:23.125 "traddr": "10.0.0.2", 00:15:23.125 "trsvcid": "4420" 00:15:23.125 }, 00:15:23.125 "peer_address": { 00:15:23.125 "trtype": "TCP", 00:15:23.125 "adrfam": "IPv4", 00:15:23.125 "traddr": "10.0.0.1", 00:15:23.125 "trsvcid": "44274" 00:15:23.125 }, 00:15:23.125 "auth": { 00:15:23.125 "state": "completed", 00:15:23.125 "digest": "sha384", 00:15:23.125 "dhgroup": "ffdhe3072" 00:15:23.125 } 00:15:23.125 } 00:15:23.125 ]' 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.125 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.383 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret 
DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:23.383 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.949 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.207 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.465 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.465 { 00:15:24.465 "cntlid": 69, 00:15:24.465 "qid": 0, 00:15:24.465 "state": "enabled", 00:15:24.465 "thread": "nvmf_tgt_poll_group_000", 00:15:24.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:24.465 "listen_address": { 00:15:24.465 "trtype": "TCP", 00:15:24.465 "adrfam": "IPv4", 00:15:24.465 "traddr": "10.0.0.2", 00:15:24.465 "trsvcid": "4420" 00:15:24.465 }, 00:15:24.465 "peer_address": { 00:15:24.465 "trtype": "TCP", 00:15:24.465 "adrfam": "IPv4", 00:15:24.465 "traddr": "10.0.0.1", 00:15:24.465 "trsvcid": "58668" 00:15:24.465 }, 00:15:24.465 "auth": { 00:15:24.465 "state": "completed", 00:15:24.465 "digest": "sha384", 00:15:24.465 "dhgroup": "ffdhe3072" 00:15:24.465 } 00:15:24.465 } 00:15:24.465 ]' 00:15:24.465 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.723 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:24.980 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:24.980 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.547 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.805 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.805 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
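Note the ckey=() expansion in the round above: the controller secret is passed only when ckeys[keyid] is non-empty, which is why the key3 rounds carry --dhchap-key key3 but no --dhchap-ctrlr-key. A short sketch of that conditional, assuming keys/ckeys are the secret arrays loaded earlier in auth.sh and hostnqn is the uuid host NQN used throughout this trace:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  keyid=3
  # expands to nothing when ckeys[3] is empty, so the flag is omitted entirely
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"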
00:15:25.805 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.805 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.805 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.063 { 00:15:26.063 "cntlid": 71, 00:15:26.063 "qid": 0, 00:15:26.063 "state": "enabled", 00:15:26.063 "thread": "nvmf_tgt_poll_group_000", 00:15:26.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:26.063 "listen_address": { 00:15:26.063 "trtype": "TCP", 00:15:26.063 "adrfam": "IPv4", 00:15:26.063 "traddr": "10.0.0.2", 00:15:26.063 "trsvcid": "4420" 00:15:26.063 }, 00:15:26.063 "peer_address": { 00:15:26.063 "trtype": "TCP", 00:15:26.063 "adrfam": "IPv4", 00:15:26.063 "traddr": "10.0.0.1", 00:15:26.063 "trsvcid": "58686" 00:15:26.063 }, 00:15:26.063 "auth": { 00:15:26.063 "state": "completed", 00:15:26.063 "digest": "sha384", 00:15:26.063 "dhgroup": "ffdhe3072" 00:15:26.063 } 00:15:26.063 } 00:15:26.063 ]' 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.063 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.321 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.321 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.321 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.321 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.321 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.580 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:26.580 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
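At this point the trace has moved from ffdhe3072 to ffdhe4096 and is starting the key0 round again; the same driver loop repeats for every dhgroup/key pair. The structure below is reconstructed from the visible trace markers (target/auth.sh@119-@123 for the loop, @65-@83 for one round), so treat it as a sketch of what is being exercised rather than a verbatim excerpt of auth.sh:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ... per this trace
    for keyid in "${!keys[@]}"; do         # key0..key3
      hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha384 "$dhgroup" "$keyid"
      # connect_authenticate (per the trace): nvmf_subsystem_add_host with --dhchap-key key$keyid
      # (plus --dhchap-ctrlr-key when defined), bdev_nvme_attach_controller with the same keys,
      # the qpair digest/dhgroup/state checks, bdev_nvme_detach_controller, then an
      # 'nvme connect ... --dhchap-secret DHHC-1:..' / 'nvme disconnect' pass and
      # nvmf_subsystem_remove_host to reset the subsystem for the next key.
    done
  done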
00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.146 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.405 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.663 { 00:15:27.663 "cntlid": 73, 00:15:27.663 "qid": 0, 00:15:27.663 "state": "enabled", 00:15:27.663 "thread": "nvmf_tgt_poll_group_000", 00:15:27.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:27.663 "listen_address": { 00:15:27.663 "trtype": "TCP", 00:15:27.663 "adrfam": "IPv4", 00:15:27.663 "traddr": "10.0.0.2", 00:15:27.663 "trsvcid": "4420" 00:15:27.663 }, 00:15:27.663 "peer_address": { 00:15:27.663 "trtype": "TCP", 00:15:27.663 "adrfam": "IPv4", 00:15:27.663 "traddr": "10.0.0.1", 00:15:27.663 "trsvcid": "58704" 00:15:27.663 }, 00:15:27.663 "auth": { 00:15:27.663 "state": "completed", 00:15:27.663 "digest": "sha384", 00:15:27.663 "dhgroup": "ffdhe4096" 00:15:27.663 } 00:15:27.663 } 00:15:27.663 ]' 00:15:27.663 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.663 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.663 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.921 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.921 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.921 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.921 
09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.921 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.178 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:28.178 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.745 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.745 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.746 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.004 00:15:29.004 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.004 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.004 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.262 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.262 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.262 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.263 { 00:15:29.263 "cntlid": 75, 00:15:29.263 "qid": 0, 00:15:29.263 "state": "enabled", 00:15:29.263 "thread": "nvmf_tgt_poll_group_000", 00:15:29.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:29.263 "listen_address": { 00:15:29.263 "trtype": "TCP", 00:15:29.263 "adrfam": "IPv4", 00:15:29.263 "traddr": "10.0.0.2", 00:15:29.263 "trsvcid": "4420" 00:15:29.263 }, 00:15:29.263 "peer_address": { 00:15:29.263 "trtype": "TCP", 00:15:29.263 "adrfam": "IPv4", 00:15:29.263 "traddr": "10.0.0.1", 00:15:29.263 "trsvcid": "58744" 00:15:29.263 }, 00:15:29.263 "auth": { 00:15:29.263 "state": "completed", 00:15:29.263 "digest": "sha384", 00:15:29.263 "dhgroup": "ffdhe4096" 00:15:29.263 } 00:15:29.263 } 00:15:29.263 ]' 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.263 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:29.521 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:30.088 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.347 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.606 00:15:30.606 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.606 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.606 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.864 { 00:15:30.864 "cntlid": 77, 00:15:30.864 "qid": 0, 00:15:30.864 "state": "enabled", 00:15:30.864 "thread": "nvmf_tgt_poll_group_000", 00:15:30.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:30.864 "listen_address": { 00:15:30.864 "trtype": "TCP", 00:15:30.864 "adrfam": "IPv4", 00:15:30.864 "traddr": "10.0.0.2", 00:15:30.864 "trsvcid": "4420" 00:15:30.864 }, 00:15:30.864 "peer_address": { 00:15:30.864 "trtype": "TCP", 00:15:30.864 "adrfam": "IPv4", 00:15:30.864 "traddr": "10.0.0.1", 00:15:30.864 "trsvcid": "58772" 00:15:30.864 }, 00:15:30.864 "auth": { 00:15:30.864 "state": "completed", 00:15:30.864 "digest": "sha384", 00:15:30.864 "dhgroup": "ffdhe4096" 00:15:30.864 } 00:15:30.864 } 00:15:30.864 ]' 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.864 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.864 09:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:31.122 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:31.689 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.689 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.948 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.206 00:15:32.206 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.206 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.206 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.465 { 00:15:32.465 "cntlid": 79, 00:15:32.465 "qid": 0, 00:15:32.465 "state": "enabled", 00:15:32.465 "thread": "nvmf_tgt_poll_group_000", 00:15:32.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:32.465 "listen_address": { 00:15:32.465 "trtype": "TCP", 00:15:32.465 "adrfam": "IPv4", 00:15:32.465 "traddr": "10.0.0.2", 00:15:32.465 "trsvcid": "4420" 00:15:32.465 }, 00:15:32.465 "peer_address": { 00:15:32.465 "trtype": "TCP", 00:15:32.465 "adrfam": "IPv4", 00:15:32.465 "traddr": "10.0.0.1", 00:15:32.465 "trsvcid": "58806" 00:15:32.465 }, 00:15:32.465 "auth": { 00:15:32.465 "state": "completed", 00:15:32.465 "digest": "sha384", 00:15:32.465 "dhgroup": "ffdhe4096" 00:15:32.465 } 00:15:32.465 } 00:15:32.465 ]' 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.465 09:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.465 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.723 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.723 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.723 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.723 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:32.723 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.288 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.547 09:26:45 
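The iteration starting here, like every other one in this trace, repeats the same cycle for one digest/dhgroup/key combination: pin the host application to a single set of DH-HMAC-CHAP parameters, authorize the host NQN on the subsystem with the key under test, attach a controller from the host application, verify the negotiated auth fields, then tear everything down again. A condensed sketch of one cycle, reconstructed from the commands visible in this log; key0/ckey0 are keyring names prepared earlier in the test, outside this section:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# 1. Make the negotiation deterministic: allow exactly one digest and one DH group on the host app.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# 2. Authorize the host on the subsystem with the key (and controller key) under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach a controller from the host application, authenticating with the same keys.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Verify (see the qpairs/jq sketch above), then tear down for the next combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"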
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.547 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.805 00:15:33.805 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.805 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.805 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.064 { 00:15:34.064 "cntlid": 81, 00:15:34.064 "qid": 0, 00:15:34.064 "state": "enabled", 00:15:34.064 "thread": "nvmf_tgt_poll_group_000", 00:15:34.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:34.064 "listen_address": { 00:15:34.064 "trtype": "TCP", 00:15:34.064 "adrfam": "IPv4", 00:15:34.064 "traddr": "10.0.0.2", 00:15:34.064 "trsvcid": "4420" 00:15:34.064 }, 00:15:34.064 "peer_address": { 00:15:34.064 "trtype": "TCP", 00:15:34.064 "adrfam": "IPv4", 00:15:34.064 "traddr": "10.0.0.1", 00:15:34.064 "trsvcid": "58838" 00:15:34.064 }, 00:15:34.064 "auth": { 00:15:34.064 "state": "completed", 00:15:34.064 "digest": 
"sha384", 00:15:34.064 "dhgroup": "ffdhe6144" 00:15:34.064 } 00:15:34.064 } 00:15:34.064 ]' 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.064 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.322 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:34.323 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.888 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.147 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.405 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.664 { 00:15:35.664 "cntlid": 83, 00:15:35.664 "qid": 0, 00:15:35.664 "state": "enabled", 00:15:35.664 "thread": "nvmf_tgt_poll_group_000", 00:15:35.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:35.664 "listen_address": { 00:15:35.664 "trtype": "TCP", 00:15:35.664 "adrfam": "IPv4", 00:15:35.664 "traddr": "10.0.0.2", 00:15:35.664 
"trsvcid": "4420" 00:15:35.664 }, 00:15:35.664 "peer_address": { 00:15:35.664 "trtype": "TCP", 00:15:35.664 "adrfam": "IPv4", 00:15:35.664 "traddr": "10.0.0.1", 00:15:35.664 "trsvcid": "35956" 00:15:35.664 }, 00:15:35.664 "auth": { 00:15:35.664 "state": "completed", 00:15:35.664 "digest": "sha384", 00:15:35.664 "dhgroup": "ffdhe6144" 00:15:35.664 } 00:15:35.664 } 00:15:35.664 ]' 00:15:35.664 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.664 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.664 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.922 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.922 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.922 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.922 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.922 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.181 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:36.181 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.748 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.748 
09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.748 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.315 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.315 { 00:15:37.315 "cntlid": 85, 00:15:37.315 "qid": 0, 00:15:37.315 "state": "enabled", 00:15:37.315 "thread": "nvmf_tgt_poll_group_000", 00:15:37.315 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:37.315 "listen_address": { 00:15:37.315 "trtype": "TCP", 00:15:37.315 "adrfam": "IPv4", 00:15:37.315 "traddr": "10.0.0.2", 00:15:37.315 "trsvcid": "4420" 00:15:37.315 }, 00:15:37.315 "peer_address": { 00:15:37.315 "trtype": "TCP", 00:15:37.315 "adrfam": "IPv4", 00:15:37.315 "traddr": "10.0.0.1", 00:15:37.315 "trsvcid": "35984" 00:15:37.315 }, 00:15:37.315 "auth": { 00:15:37.315 "state": "completed", 00:15:37.315 "digest": "sha384", 00:15:37.315 "dhgroup": "ffdhe6144" 00:15:37.315 } 00:15:37.315 } 00:15:37.315 ]' 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.315 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:37.573 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:38.140 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.140 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:38.140 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.140 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.399 09:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.399 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.966 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.966 { 00:15:38.966 "cntlid": 87, 
00:15:38.966 "qid": 0, 00:15:38.966 "state": "enabled", 00:15:38.966 "thread": "nvmf_tgt_poll_group_000", 00:15:38.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:38.966 "listen_address": { 00:15:38.966 "trtype": "TCP", 00:15:38.966 "adrfam": "IPv4", 00:15:38.966 "traddr": "10.0.0.2", 00:15:38.966 "trsvcid": "4420" 00:15:38.966 }, 00:15:38.966 "peer_address": { 00:15:38.966 "trtype": "TCP", 00:15:38.966 "adrfam": "IPv4", 00:15:38.966 "traddr": "10.0.0.1", 00:15:38.966 "trsvcid": "36020" 00:15:38.966 }, 00:15:38.966 "auth": { 00:15:38.966 "state": "completed", 00:15:38.966 "digest": "sha384", 00:15:38.966 "dhgroup": "ffdhe6144" 00:15:38.966 } 00:15:38.966 } 00:15:38.966 ]' 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.966 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.224 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.224 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.224 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.224 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:39.224 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.790 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.048 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.614 00:15:40.614 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.614 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.614 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
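Before inspecting auth state, each iteration does the cheap sanity check seen just above: list the controllers on the host application and expect exactly the name that was passed to bdev_nvme_attach_controller. As a stand-alone snippet, with the same rpc.py path and host socket used throughout this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# The attach succeeded only if the host app now reports a controller named nvme0.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "controller nvme0 was not created" >&2; exit 1; }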
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.872 { 00:15:40.872 "cntlid": 89, 00:15:40.872 "qid": 0, 00:15:40.872 "state": "enabled", 00:15:40.872 "thread": "nvmf_tgt_poll_group_000", 00:15:40.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:40.872 "listen_address": { 00:15:40.872 "trtype": "TCP", 00:15:40.872 "adrfam": "IPv4", 00:15:40.872 "traddr": "10.0.0.2", 00:15:40.872 "trsvcid": "4420" 00:15:40.872 }, 00:15:40.872 "peer_address": { 00:15:40.872 "trtype": "TCP", 00:15:40.872 "adrfam": "IPv4", 00:15:40.872 "traddr": "10.0.0.1", 00:15:40.872 "trsvcid": "36044" 00:15:40.872 }, 00:15:40.872 "auth": { 00:15:40.872 "state": "completed", 00:15:40.872 "digest": "sha384", 00:15:40.872 "dhgroup": "ffdhe8192" 00:15:40.872 } 00:15:40.872 } 00:15:40.872 ]' 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.872 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.130 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:41.130 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 09:26:53 
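After the SPDK-host pass, the same credentials are exercised through the Linux kernel initiator with nvme-cli, exactly as invoked above; the DHHC-1 strings below are the throwaway test secrets from this run, copied verbatim. A trimmed-down version of that connect/disconnect pair:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
# Kernel-initiator connect, authenticating with the key0 secret pair used in this iteration.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=:'
# Drop the kernel controller again; the trace then de-authorizes the host with nvmf_subsystem_remove_host.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0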
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.696 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.954 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.520 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.520 { 00:15:42.520 "cntlid": 91, 00:15:42.520 "qid": 0, 00:15:42.520 "state": "enabled", 00:15:42.520 "thread": "nvmf_tgt_poll_group_000", 00:15:42.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:42.520 "listen_address": { 00:15:42.520 "trtype": "TCP", 00:15:42.520 "adrfam": "IPv4", 00:15:42.520 "traddr": "10.0.0.2", 00:15:42.520 "trsvcid": "4420" 00:15:42.520 }, 00:15:42.520 "peer_address": { 00:15:42.520 "trtype": "TCP", 00:15:42.520 "adrfam": "IPv4", 00:15:42.520 "traddr": "10.0.0.1", 00:15:42.520 "trsvcid": "36074" 00:15:42.520 }, 00:15:42.520 "auth": { 00:15:42.520 "state": "completed", 00:15:42.520 "digest": "sha384", 00:15:42.520 "dhgroup": "ffdhe8192" 00:15:42.520 } 00:15:42.520 } 00:15:42.520 ]' 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.520 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.777 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.777 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.778 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.778 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.778 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.778 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:42.778 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:43.342 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:43.600 09:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.600 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.601 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.601 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.165 00:15:44.165 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.165 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.165 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.423 09:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.423 { 00:15:44.423 "cntlid": 93, 00:15:44.423 "qid": 0, 00:15:44.423 "state": "enabled", 00:15:44.423 "thread": "nvmf_tgt_poll_group_000", 00:15:44.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:44.423 "listen_address": { 00:15:44.423 "trtype": "TCP", 00:15:44.423 "adrfam": "IPv4", 00:15:44.423 "traddr": "10.0.0.2", 00:15:44.423 "trsvcid": "4420" 00:15:44.423 }, 00:15:44.423 "peer_address": { 00:15:44.423 "trtype": "TCP", 00:15:44.423 "adrfam": "IPv4", 00:15:44.423 "traddr": "10.0.0.1", 00:15:44.423 "trsvcid": "36088" 00:15:44.423 }, 00:15:44.423 "auth": { 00:15:44.423 "state": "completed", 00:15:44.423 "digest": "sha384", 00:15:44.423 "dhgroup": "ffdhe8192" 00:15:44.423 } 00:15:44.423 } 00:15:44.423 ]' 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.423 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.681 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:44.681 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.246 09:26:57 
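The --dhchap-secret / --dhchap-ctrl-secret strings above follow the DH-HMAC-CHAP secret representation DHHC-1:<subtype>:<base64>:. As I read that format, subtype 00 means the secret is used as-is while 01/02/03 select a SHA-256/384/512 transform (matching nvme-cli's gen-dhchap-key --hmac values), and the base64 payload carries the secret followed by a 4-byte CRC-32. A quick way to peek at one of the secrets taken verbatim from this log:

# Decode one of this run's test secrets and count its bytes.
secret='DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==:'
b64=${secret#DHHC-1:??:}             # strip the "DHHC-1:<subtype>:" prefix
b64=${b64%:}                         # and the trailing colon
echo -n "$b64" | base64 -d | wc -c   # secret bytes plus the 4-byte CRC-32 tail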
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.246 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.504 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.070 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.070 { 00:15:46.070 "cntlid": 95, 00:15:46.070 "qid": 0, 00:15:46.070 "state": "enabled", 00:15:46.070 "thread": "nvmf_tgt_poll_group_000", 00:15:46.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:46.070 "listen_address": { 00:15:46.070 "trtype": "TCP", 00:15:46.070 "adrfam": "IPv4", 00:15:46.070 "traddr": "10.0.0.2", 00:15:46.070 "trsvcid": "4420" 00:15:46.070 }, 00:15:46.070 "peer_address": { 00:15:46.070 "trtype": "TCP", 00:15:46.070 "adrfam": "IPv4", 00:15:46.070 "traddr": "10.0.0.1", 00:15:46.070 "trsvcid": "34350" 00:15:46.070 }, 00:15:46.070 "auth": { 00:15:46.070 "state": "completed", 00:15:46.070 "digest": "sha384", 00:15:46.070 "dhgroup": "ffdhe8192" 00:15:46.070 } 00:15:46.070 } 00:15:46.070 ]' 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.070 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.328 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.328 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.328 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.328 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.328 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.329 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:46.329 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.894 09:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.894 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.152 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.410 00:15:47.410 
09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.410 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.410 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.668 { 00:15:47.668 "cntlid": 97, 00:15:47.668 "qid": 0, 00:15:47.668 "state": "enabled", 00:15:47.668 "thread": "nvmf_tgt_poll_group_000", 00:15:47.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:47.668 "listen_address": { 00:15:47.668 "trtype": "TCP", 00:15:47.668 "adrfam": "IPv4", 00:15:47.668 "traddr": "10.0.0.2", 00:15:47.668 "trsvcid": "4420" 00:15:47.668 }, 00:15:47.668 "peer_address": { 00:15:47.668 "trtype": "TCP", 00:15:47.668 "adrfam": "IPv4", 00:15:47.668 "traddr": "10.0.0.1", 00:15:47.668 "trsvcid": "34380" 00:15:47.668 }, 00:15:47.668 "auth": { 00:15:47.668 "state": "completed", 00:15:47.668 "digest": "sha512", 00:15:47.668 "dhgroup": "null" 00:15:47.668 } 00:15:47.668 } 00:15:47.668 ]' 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.668 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.668 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.668 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.668 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.926 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:47.926 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.492 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.750 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.007 00:15:49.007 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.007 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.008 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.265 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.265 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.266 { 00:15:49.266 "cntlid": 99, 00:15:49.266 "qid": 0, 00:15:49.266 "state": "enabled", 00:15:49.266 "thread": "nvmf_tgt_poll_group_000", 00:15:49.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:49.266 "listen_address": { 00:15:49.266 "trtype": "TCP", 00:15:49.266 "adrfam": "IPv4", 00:15:49.266 "traddr": "10.0.0.2", 00:15:49.266 "trsvcid": "4420" 00:15:49.266 }, 00:15:49.266 "peer_address": { 00:15:49.266 "trtype": "TCP", 00:15:49.266 "adrfam": "IPv4", 00:15:49.266 "traddr": "10.0.0.1", 00:15:49.266 "trsvcid": "34424" 00:15:49.266 }, 00:15:49.266 "auth": { 00:15:49.266 "state": "completed", 00:15:49.266 "digest": "sha512", 00:15:49.266 "dhgroup": "null" 00:15:49.266 } 00:15:49.266 } 00:15:49.266 ]' 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.266 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.524 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:49.524 09:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.090 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
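
Each pass of the loop above exercises the same RPC sequence; only the digest/dhgroup pair and the key index change. A minimal sketch of one round, condensed from the rpc.py invocations in this trace (the socket path, addresses, NQNs and key names are the ones this particular job uses; rpc_cmd is the test's wrapper for the target-side RPC socket, which the trace does not show):

  # Host-side bdev_nvme options: restrict DH-HMAC-CHAP to one digest/dhgroup pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null

  # Target side: allow this host NQN on the subsystem with key2 (and its controller key)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, presenting the same key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
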
00:15:50.348 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.605 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.605 { 00:15:50.605 "cntlid": 101, 00:15:50.605 "qid": 0, 00:15:50.605 "state": "enabled", 00:15:50.605 "thread": "nvmf_tgt_poll_group_000", 00:15:50.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:50.605 "listen_address": { 00:15:50.605 "trtype": "TCP", 00:15:50.605 "adrfam": "IPv4", 00:15:50.605 "traddr": "10.0.0.2", 00:15:50.605 "trsvcid": "4420" 00:15:50.605 }, 00:15:50.605 "peer_address": { 00:15:50.605 "trtype": "TCP", 00:15:50.605 "adrfam": "IPv4", 00:15:50.605 "traddr": "10.0.0.1", 00:15:50.605 "trsvcid": "34458" 00:15:50.605 }, 00:15:50.605 "auth": { 00:15:50.605 "state": "completed", 00:15:50.605 "digest": "sha512", 00:15:50.605 "dhgroup": "null" 00:15:50.605 } 00:15:50.605 } 00:15:50.605 ]' 00:15:50.605 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.863 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.863 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.121 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:51.121 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.687 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.945 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.203 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.203 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.461 { 00:15:52.461 "cntlid": 103, 00:15:52.461 "qid": 0, 00:15:52.461 "state": "enabled", 00:15:52.461 "thread": "nvmf_tgt_poll_group_000", 00:15:52.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:52.461 "listen_address": { 00:15:52.461 "trtype": "TCP", 00:15:52.461 "adrfam": "IPv4", 00:15:52.461 "traddr": "10.0.0.2", 00:15:52.461 "trsvcid": "4420" 00:15:52.461 }, 00:15:52.461 "peer_address": { 00:15:52.461 "trtype": "TCP", 00:15:52.461 "adrfam": "IPv4", 00:15:52.461 "traddr": "10.0.0.1", 00:15:52.461 "trsvcid": "34492" 00:15:52.461 }, 00:15:52.461 "auth": { 00:15:52.461 "state": "completed", 00:15:52.461 "digest": "sha512", 00:15:52.461 "dhgroup": "null" 00:15:52.461 } 00:15:52.461 } 00:15:52.461 ]' 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.461 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.719 09:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:52.719 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:53.283 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
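
After the SPDK-host attach is verified, each round repeats the handshake with the kernel initiator: nvme-cli connects with the literal DHHC-1 secrets rather than key names, and the host entry is then removed from the subsystem before the next digest/dhgroup/key combination. A condensed sketch of that leg, based on the nvme and rpc calls above; the DHCHAP_SECRET / DHCHAP_CTRL_SECRET variables are placeholders for the literal DHHC-1 strings the trace passes (rounds that use key3 pass only --dhchap-secret, since key3 has no controller key):

  # Kernel initiator: authenticate with the DH-HMAC-CHAP secrets on the command line
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Target: drop the host entry so the next round starts from a clean subsystem
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
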
00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.541 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.799 00:15:53.799 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.799 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.799 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.799 { 00:15:53.799 "cntlid": 105, 00:15:53.799 "qid": 0, 00:15:53.799 "state": "enabled", 00:15:53.799 "thread": "nvmf_tgt_poll_group_000", 00:15:53.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:53.799 "listen_address": { 00:15:53.799 "trtype": "TCP", 00:15:53.799 "adrfam": "IPv4", 00:15:53.799 "traddr": "10.0.0.2", 00:15:53.799 "trsvcid": "4420" 00:15:53.799 }, 00:15:53.799 "peer_address": { 00:15:53.799 "trtype": "TCP", 00:15:53.799 "adrfam": "IPv4", 00:15:53.799 "traddr": "10.0.0.1", 00:15:53.799 "trsvcid": "34522" 00:15:53.799 }, 00:15:53.799 "auth": { 00:15:53.799 "state": "completed", 00:15:53.799 "digest": "sha512", 00:15:53.799 "dhgroup": "ffdhe2048" 00:15:53.799 } 00:15:53.799 } 00:15:53.799 ]' 00:15:53.799 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.056 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.056 09:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.314 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:54.314 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.882 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.160 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.160 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.424 { 00:15:55.424 "cntlid": 107, 00:15:55.424 "qid": 0, 00:15:55.424 "state": "enabled", 00:15:55.424 "thread": "nvmf_tgt_poll_group_000", 00:15:55.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:55.424 "listen_address": { 00:15:55.424 "trtype": "TCP", 00:15:55.424 "adrfam": "IPv4", 00:15:55.424 "traddr": "10.0.0.2", 00:15:55.424 "trsvcid": "4420" 00:15:55.424 }, 00:15:55.424 "peer_address": { 00:15:55.424 "trtype": "TCP", 00:15:55.424 "adrfam": "IPv4", 00:15:55.424 "traddr": "10.0.0.1", 00:15:55.424 "trsvcid": "44096" 00:15:55.424 }, 00:15:55.424 "auth": { 00:15:55.424 "state": "completed", 00:15:55.424 "digest": "sha512", 00:15:55.424 "dhgroup": "ffdhe2048" 00:15:55.424 } 00:15:55.424 } 00:15:55.424 ]' 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.424 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.695 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.695 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:55.695 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.695 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.695 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.695 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:55.695 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:15:56.272 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
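
Every attach earlier in the trace is verified the same way before the controller is torn down: the host RPC confirms the bdev controller exists, and the target RPC reports the qpair's negotiated auth parameters, which the script matches against the digest and dhgroup it just configured. A sketch of that check using the same jq filters as the trace:

  # Host: the attached controller should show up as nvme0
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # -> nvme0

  # Target: the accepted qpair reports what was actually negotiated
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # must match the configured digest
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # must match the configured dhgroup
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed

  # Detach before the kernel-initiator pass re-connects with the same key
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
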
00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.530 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.788 00:15:56.788 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.788 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.788 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.046 { 00:15:57.046 "cntlid": 109, 00:15:57.046 "qid": 0, 00:15:57.046 "state": "enabled", 00:15:57.046 "thread": "nvmf_tgt_poll_group_000", 00:15:57.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:57.046 "listen_address": { 00:15:57.046 "trtype": "TCP", 00:15:57.046 "adrfam": "IPv4", 00:15:57.046 "traddr": "10.0.0.2", 00:15:57.046 "trsvcid": "4420" 00:15:57.046 }, 00:15:57.046 "peer_address": { 00:15:57.046 "trtype": "TCP", 00:15:57.046 "adrfam": "IPv4", 00:15:57.046 "traddr": "10.0.0.1", 00:15:57.046 "trsvcid": "44110" 00:15:57.046 }, 00:15:57.046 "auth": { 00:15:57.046 "state": "completed", 00:15:57.046 "digest": "sha512", 00:15:57.046 "dhgroup": "ffdhe2048" 00:15:57.046 } 00:15:57.046 } 00:15:57.046 ]' 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.046 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.304 09:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:57.304 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:15:57.869 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.127 09:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.127 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.385 00:15:58.385 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.385 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.385 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.643 { 00:15:58.643 "cntlid": 111, 00:15:58.643 "qid": 0, 00:15:58.643 "state": "enabled", 00:15:58.643 "thread": "nvmf_tgt_poll_group_000", 00:15:58.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:15:58.643 "listen_address": { 00:15:58.643 "trtype": "TCP", 00:15:58.643 "adrfam": "IPv4", 00:15:58.643 "traddr": "10.0.0.2", 00:15:58.643 "trsvcid": "4420" 00:15:58.643 }, 00:15:58.643 "peer_address": { 00:15:58.643 "trtype": "TCP", 00:15:58.643 "adrfam": "IPv4", 00:15:58.643 "traddr": "10.0.0.1", 00:15:58.643 "trsvcid": "44146" 00:15:58.643 }, 00:15:58.643 "auth": { 00:15:58.643 "state": "completed", 00:15:58.643 "digest": "sha512", 00:15:58.643 "dhgroup": "ffdhe2048" 00:15:58.643 } 00:15:58.643 } 00:15:58.643 ]' 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.643 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.643 
09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:58.901 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:59.466 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:59.725 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:59.725 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.725 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.725 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.984 00:15:59.984 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.984 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.984 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.242 { 00:16:00.242 "cntlid": 113, 00:16:00.242 "qid": 0, 00:16:00.242 "state": "enabled", 00:16:00.242 "thread": "nvmf_tgt_poll_group_000", 00:16:00.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:00.242 "listen_address": { 00:16:00.242 "trtype": "TCP", 00:16:00.242 "adrfam": "IPv4", 00:16:00.242 "traddr": "10.0.0.2", 00:16:00.242 "trsvcid": "4420" 00:16:00.242 }, 00:16:00.242 "peer_address": { 00:16:00.242 "trtype": "TCP", 00:16:00.242 "adrfam": "IPv4", 00:16:00.242 "traddr": "10.0.0.1", 00:16:00.242 "trsvcid": "44172" 00:16:00.242 }, 00:16:00.242 "auth": { 00:16:00.242 "state": "completed", 00:16:00.242 "digest": "sha512", 00:16:00.242 "dhgroup": "ffdhe3072" 00:16:00.242 } 00:16:00.242 } 00:16:00.242 ]' 00:16:00.242 09:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.242 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.500 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.500 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.500 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.500 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:00.500 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.066 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.324 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.583 00:16:01.583 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.583 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.583 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.841 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.841 { 00:16:01.841 "cntlid": 115, 00:16:01.841 "qid": 0, 00:16:01.841 "state": "enabled", 00:16:01.841 "thread": "nvmf_tgt_poll_group_000", 00:16:01.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:01.841 "listen_address": { 00:16:01.841 "trtype": "TCP", 00:16:01.841 "adrfam": "IPv4", 00:16:01.841 "traddr": "10.0.0.2", 00:16:01.841 "trsvcid": "4420" 00:16:01.841 }, 00:16:01.841 "peer_address": { 00:16:01.841 "trtype": "TCP", 00:16:01.841 "adrfam": "IPv4", 
00:16:01.841 "traddr": "10.0.0.1", 00:16:01.841 "trsvcid": "44202" 00:16:01.841 }, 00:16:01.842 "auth": { 00:16:01.842 "state": "completed", 00:16:01.842 "digest": "sha512", 00:16:01.842 "dhgroup": "ffdhe3072" 00:16:01.842 } 00:16:01.842 } 00:16:01.842 ]' 00:16:01.842 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.842 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.842 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.842 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.842 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.100 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.100 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.100 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.100 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:02.100 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:02.665 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.924 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.181 00:16:03.181 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.181 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.181 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.440 { 00:16:03.440 "cntlid": 117, 00:16:03.440 "qid": 0, 00:16:03.440 "state": "enabled", 00:16:03.440 "thread": "nvmf_tgt_poll_group_000", 00:16:03.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:03.440 "listen_address": { 00:16:03.440 "trtype": "TCP", 
00:16:03.440 "adrfam": "IPv4", 00:16:03.440 "traddr": "10.0.0.2", 00:16:03.440 "trsvcid": "4420" 00:16:03.440 }, 00:16:03.440 "peer_address": { 00:16:03.440 "trtype": "TCP", 00:16:03.440 "adrfam": "IPv4", 00:16:03.440 "traddr": "10.0.0.1", 00:16:03.440 "trsvcid": "44244" 00:16:03.440 }, 00:16:03.440 "auth": { 00:16:03.440 "state": "completed", 00:16:03.440 "digest": "sha512", 00:16:03.440 "dhgroup": "ffdhe3072" 00:16:03.440 } 00:16:03.440 } 00:16:03.440 ]' 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.440 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.698 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:03.698 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.264 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.522 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.781 00:16:04.781 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.781 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.781 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.039 { 00:16:05.039 "cntlid": 119, 00:16:05.039 "qid": 0, 00:16:05.039 "state": "enabled", 00:16:05.039 "thread": "nvmf_tgt_poll_group_000", 00:16:05.039 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:05.039 "listen_address": { 00:16:05.039 "trtype": "TCP", 00:16:05.039 "adrfam": "IPv4", 00:16:05.039 "traddr": "10.0.0.2", 00:16:05.039 "trsvcid": "4420" 00:16:05.039 }, 00:16:05.039 "peer_address": { 00:16:05.039 "trtype": "TCP", 00:16:05.039 "adrfam": "IPv4", 00:16:05.039 "traddr": "10.0.0.1", 00:16:05.039 "trsvcid": "33452" 00:16:05.039 }, 00:16:05.039 "auth": { 00:16:05.039 "state": "completed", 00:16:05.039 "digest": "sha512", 00:16:05.039 "dhgroup": "ffdhe3072" 00:16:05.039 } 00:16:05.039 } 00:16:05.039 ]' 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.039 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.297 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:05.297 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.863 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:05.863 09:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.121 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.378 00:16:06.378 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.378 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.378 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.636 09:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.636 { 00:16:06.636 "cntlid": 121, 00:16:06.636 "qid": 0, 00:16:06.636 "state": "enabled", 00:16:06.636 "thread": "nvmf_tgt_poll_group_000", 00:16:06.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:06.636 "listen_address": { 00:16:06.636 "trtype": "TCP", 00:16:06.636 "adrfam": "IPv4", 00:16:06.636 "traddr": "10.0.0.2", 00:16:06.636 "trsvcid": "4420" 00:16:06.636 }, 00:16:06.636 "peer_address": { 00:16:06.636 "trtype": "TCP", 00:16:06.636 "adrfam": "IPv4", 00:16:06.636 "traddr": "10.0.0.1", 00:16:06.636 "trsvcid": "33484" 00:16:06.636 }, 00:16:06.636 "auth": { 00:16:06.636 "state": "completed", 00:16:06.636 "digest": "sha512", 00:16:06.636 "dhgroup": "ffdhe4096" 00:16:06.636 } 00:16:06.636 } 00:16:06.636 ]' 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.636 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.894 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:06.894 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:07.459 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.459 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:07.459 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.459 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.460 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
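That completes one full pass for key0 with ffdhe4096; the next trace line starts the pass for key1. Per the target/auth.sh@119-123 entries visible in this section, the driver around the helper sketched earlier is simply a nest over DH groups and key indices, narrowing the host-side options before each attempt. A minimal sketch, assuming the same fixed sha512 digest used throughout this stretch of the log and the group list exercised here:

for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 as seen in this section
    for keyid in "${!keys[@]}"; do
        # host may only negotiate the digest/DH group under test
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done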
00:16:07.460 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.460 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:07.460 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.717 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.974 00:16:07.974 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.974 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.974 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.232 { 00:16:08.232 "cntlid": 123, 00:16:08.232 "qid": 0, 00:16:08.232 "state": "enabled", 00:16:08.232 "thread": "nvmf_tgt_poll_group_000", 00:16:08.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:08.232 "listen_address": { 00:16:08.232 "trtype": "TCP", 00:16:08.232 "adrfam": "IPv4", 00:16:08.232 "traddr": "10.0.0.2", 00:16:08.232 "trsvcid": "4420" 00:16:08.232 }, 00:16:08.232 "peer_address": { 00:16:08.232 "trtype": "TCP", 00:16:08.232 "adrfam": "IPv4", 00:16:08.232 "traddr": "10.0.0.1", 00:16:08.232 "trsvcid": "33508" 00:16:08.232 }, 00:16:08.232 "auth": { 00:16:08.232 "state": "completed", 00:16:08.232 "digest": "sha512", 00:16:08.232 "dhgroup": "ffdhe4096" 00:16:08.232 } 00:16:08.232 } 00:16:08.232 ]' 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.232 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.489 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:08.489 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.055 09:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.055 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.312 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:09.312 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.313 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.570 00:16:09.570 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.570 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.570 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.828 09:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.828 { 00:16:09.828 "cntlid": 125, 00:16:09.828 "qid": 0, 00:16:09.828 "state": "enabled", 00:16:09.828 "thread": "nvmf_tgt_poll_group_000", 00:16:09.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:09.828 "listen_address": { 00:16:09.828 "trtype": "TCP", 00:16:09.828 "adrfam": "IPv4", 00:16:09.828 "traddr": "10.0.0.2", 00:16:09.828 "trsvcid": "4420" 00:16:09.828 }, 00:16:09.828 "peer_address": { 00:16:09.828 "trtype": "TCP", 00:16:09.828 "adrfam": "IPv4", 00:16:09.828 "traddr": "10.0.0.1", 00:16:09.828 "trsvcid": "33546" 00:16:09.828 }, 00:16:09.828 "auth": { 00:16:09.828 "state": "completed", 00:16:09.828 "digest": "sha512", 00:16:09.828 "dhgroup": "ffdhe4096" 00:16:09.828 } 00:16:09.828 } 00:16:09.828 ]' 00:16:09.828 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.828 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.086 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:10.086 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.651 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.909 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.167 00:16:11.167 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.167 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.167 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.425 09:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.425 { 00:16:11.425 "cntlid": 127, 00:16:11.425 "qid": 0, 00:16:11.425 "state": "enabled", 00:16:11.425 "thread": "nvmf_tgt_poll_group_000", 00:16:11.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:11.425 "listen_address": { 00:16:11.425 "trtype": "TCP", 00:16:11.425 "adrfam": "IPv4", 00:16:11.425 "traddr": "10.0.0.2", 00:16:11.425 "trsvcid": "4420" 00:16:11.425 }, 00:16:11.425 "peer_address": { 00:16:11.425 "trtype": "TCP", 00:16:11.425 "adrfam": "IPv4", 00:16:11.425 "traddr": "10.0.0.1", 00:16:11.425 "trsvcid": "33568" 00:16:11.425 }, 00:16:11.425 "auth": { 00:16:11.425 "state": "completed", 00:16:11.425 "digest": "sha512", 00:16:11.425 "dhgroup": "ffdhe4096" 00:16:11.425 } 00:16:11.425 } 00:16:11.425 ]' 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.425 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.683 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:11.683 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:12.248 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.506 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.763 00:16:12.763 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.763 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.763 
09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.021 { 00:16:13.021 "cntlid": 129, 00:16:13.021 "qid": 0, 00:16:13.021 "state": "enabled", 00:16:13.021 "thread": "nvmf_tgt_poll_group_000", 00:16:13.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:13.021 "listen_address": { 00:16:13.021 "trtype": "TCP", 00:16:13.021 "adrfam": "IPv4", 00:16:13.021 "traddr": "10.0.0.2", 00:16:13.021 "trsvcid": "4420" 00:16:13.021 }, 00:16:13.021 "peer_address": { 00:16:13.021 "trtype": "TCP", 00:16:13.021 "adrfam": "IPv4", 00:16:13.021 "traddr": "10.0.0.1", 00:16:13.021 "trsvcid": "33584" 00:16:13.021 }, 00:16:13.021 "auth": { 00:16:13.021 "state": "completed", 00:16:13.021 "digest": "sha512", 00:16:13.021 "dhgroup": "ffdhe6144" 00:16:13.021 } 00:16:13.021 } 00:16:13.021 ]' 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.021 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.279 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:13.279 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret 
DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:13.844 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.102 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.359 00:16:14.359 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.359 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.359 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.616 { 00:16:14.616 "cntlid": 131, 00:16:14.616 "qid": 0, 00:16:14.616 "state": "enabled", 00:16:14.616 "thread": "nvmf_tgt_poll_group_000", 00:16:14.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:14.616 "listen_address": { 00:16:14.616 "trtype": "TCP", 00:16:14.616 "adrfam": "IPv4", 00:16:14.616 "traddr": "10.0.0.2", 00:16:14.616 "trsvcid": "4420" 00:16:14.616 }, 00:16:14.616 "peer_address": { 00:16:14.616 "trtype": "TCP", 00:16:14.616 "adrfam": "IPv4", 00:16:14.616 "traddr": "10.0.0.1", 00:16:14.616 "trsvcid": "37198" 00:16:14.616 }, 00:16:14.616 "auth": { 00:16:14.616 "state": "completed", 00:16:14.616 "digest": "sha512", 00:16:14.616 "dhgroup": "ffdhe6144" 00:16:14.616 } 00:16:14.616 } 00:16:14.616 ]' 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.616 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.874 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.874 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.874 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.874 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:14.874 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.439 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.697 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.955 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.213 { 00:16:16.213 "cntlid": 133, 00:16:16.213 "qid": 0, 00:16:16.213 "state": "enabled", 00:16:16.213 "thread": "nvmf_tgt_poll_group_000", 00:16:16.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:16.213 "listen_address": { 00:16:16.213 "trtype": "TCP", 00:16:16.213 "adrfam": "IPv4", 00:16:16.213 "traddr": "10.0.0.2", 00:16:16.213 "trsvcid": "4420" 00:16:16.213 }, 00:16:16.213 "peer_address": { 00:16:16.213 "trtype": "TCP", 00:16:16.213 "adrfam": "IPv4", 00:16:16.213 "traddr": "10.0.0.1", 00:16:16.213 "trsvcid": "37208" 00:16:16.213 }, 00:16:16.213 "auth": { 00:16:16.213 "state": "completed", 00:16:16.213 "digest": "sha512", 00:16:16.213 "dhgroup": "ffdhe6144" 00:16:16.213 } 00:16:16.213 } 00:16:16.213 ]' 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.213 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.471 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.471 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.471 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.471 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.471 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.729 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret 
DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:16.729 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:17.294 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.294 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:17.295 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.860 00:16:17.860 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.860 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.860 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.860 { 00:16:17.860 "cntlid": 135, 00:16:17.860 "qid": 0, 00:16:17.860 "state": "enabled", 00:16:17.860 "thread": "nvmf_tgt_poll_group_000", 00:16:17.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:17.860 "listen_address": { 00:16:17.860 "trtype": "TCP", 00:16:17.860 "adrfam": "IPv4", 00:16:17.860 "traddr": "10.0.0.2", 00:16:17.860 "trsvcid": "4420" 00:16:17.860 }, 00:16:17.860 "peer_address": { 00:16:17.860 "trtype": "TCP", 00:16:17.860 "adrfam": "IPv4", 00:16:17.860 "traddr": "10.0.0.1", 00:16:17.860 "trsvcid": "37236" 00:16:17.860 }, 00:16:17.860 "auth": { 00:16:17.860 "state": "completed", 00:16:17.860 "digest": "sha512", 00:16:17.860 "dhgroup": "ffdhe6144" 00:16:17.860 } 00:16:17.860 } 00:16:17.860 ]' 00:16:17.860 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.117 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.118 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.374 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:18.374 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.938 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.939 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.939 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.195 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.195 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.452 00:16:19.452 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.452 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.452 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.708 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.708 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.708 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.708 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.708 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.708 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.708 { 00:16:19.708 "cntlid": 137, 00:16:19.708 "qid": 0, 00:16:19.708 "state": "enabled", 00:16:19.708 "thread": "nvmf_tgt_poll_group_000", 00:16:19.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:19.708 "listen_address": { 00:16:19.708 "trtype": "TCP", 00:16:19.708 "adrfam": "IPv4", 00:16:19.708 "traddr": "10.0.0.2", 00:16:19.708 "trsvcid": "4420" 00:16:19.708 }, 00:16:19.708 "peer_address": { 00:16:19.708 "trtype": "TCP", 00:16:19.708 "adrfam": "IPv4", 00:16:19.708 "traddr": "10.0.0.1", 00:16:19.708 "trsvcid": "37246" 00:16:19.708 }, 00:16:19.708 "auth": { 00:16:19.708 "state": "completed", 00:16:19.708 "digest": "sha512", 00:16:19.708 "dhgroup": "ffdhe8192" 00:16:19.708 } 00:16:19.708 } 00:16:19.708 ]' 00:16:19.708 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.708 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.708 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:19.964 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.529 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.786 09:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.786 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.357 00:16:21.357 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.357 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.357 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.613 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.614 { 00:16:21.614 "cntlid": 139, 00:16:21.614 "qid": 0, 00:16:21.614 "state": "enabled", 00:16:21.614 "thread": "nvmf_tgt_poll_group_000", 00:16:21.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:21.614 "listen_address": { 00:16:21.614 "trtype": "TCP", 00:16:21.614 "adrfam": "IPv4", 00:16:21.614 "traddr": "10.0.0.2", 00:16:21.614 "trsvcid": "4420" 00:16:21.614 }, 00:16:21.614 "peer_address": { 00:16:21.614 "trtype": "TCP", 00:16:21.614 "adrfam": "IPv4", 00:16:21.614 "traddr": "10.0.0.1", 00:16:21.614 "trsvcid": "37264" 00:16:21.614 }, 00:16:21.614 "auth": { 00:16:21.614 "state": "completed", 00:16:21.614 "digest": "sha512", 00:16:21.614 "dhgroup": "ffdhe8192" 00:16:21.614 } 00:16:21.614 } 00:16:21.614 ]' 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.614 09:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.614 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.871 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:21.871 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: --dhchap-ctrl-secret DHHC-1:02:ZGE4MmNlMDJjM2ExNjA2NGZhYTk2ZDAwYTMyMzYzYjU3OTc3ZDdiNzYyOTNhNTZmVPOMJw==: 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.434 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.691 09:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.691 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.253 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.253 { 00:16:23.253 "cntlid": 141, 00:16:23.253 "qid": 0, 00:16:23.253 "state": "enabled", 00:16:23.253 "thread": "nvmf_tgt_poll_group_000", 00:16:23.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:23.253 "listen_address": { 00:16:23.253 "trtype": "TCP", 00:16:23.253 "adrfam": "IPv4", 00:16:23.253 "traddr": "10.0.0.2", 00:16:23.253 "trsvcid": "4420" 00:16:23.253 }, 00:16:23.253 "peer_address": { 00:16:23.253 "trtype": "TCP", 00:16:23.253 "adrfam": "IPv4", 00:16:23.253 "traddr": "10.0.0.1", 00:16:23.253 "trsvcid": "37298" 00:16:23.253 }, 00:16:23.253 "auth": { 00:16:23.253 "state": "completed", 00:16:23.253 "digest": "sha512", 00:16:23.253 "dhgroup": "ffdhe8192" 00:16:23.253 } 00:16:23.253 } 00:16:23.253 ]' 00:16:23.253 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.510 09:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.510 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.767 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:23.767 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:01:YjgwN2RiMGM2MjJmMDMxZmY1YmZmYWIwODkxOTA5NDaS1WR7: 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.330 09:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.330 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.587 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.587 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.587 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.844 00:16:24.844 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.844 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.844 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.101 { 00:16:25.101 "cntlid": 143, 00:16:25.101 "qid": 0, 00:16:25.101 "state": "enabled", 00:16:25.101 "thread": "nvmf_tgt_poll_group_000", 00:16:25.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:25.101 "listen_address": { 00:16:25.101 "trtype": "TCP", 00:16:25.101 "adrfam": "IPv4", 00:16:25.101 "traddr": "10.0.0.2", 00:16:25.101 "trsvcid": "4420" 00:16:25.101 }, 00:16:25.101 "peer_address": { 00:16:25.101 "trtype": "TCP", 00:16:25.101 "adrfam": "IPv4", 00:16:25.101 "traddr": "10.0.0.1", 00:16:25.101 "trsvcid": "39654" 00:16:25.101 }, 00:16:25.101 "auth": { 00:16:25.101 "state": "completed", 00:16:25.101 "digest": "sha512", 00:16:25.101 "dhgroup": "ffdhe8192" 00:16:25.101 } 00:16:25.101 } 00:16:25.101 ]' 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.101 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.101 
09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:25.358 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:25.922 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.179 09:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.179 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.743 00:16:26.743 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.743 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.743 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.999 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.000 { 00:16:27.000 "cntlid": 145, 00:16:27.000 "qid": 0, 00:16:27.000 "state": "enabled", 00:16:27.000 "thread": "nvmf_tgt_poll_group_000", 00:16:27.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:27.000 "listen_address": { 00:16:27.000 "trtype": "TCP", 00:16:27.000 "adrfam": "IPv4", 00:16:27.000 "traddr": "10.0.0.2", 00:16:27.000 "trsvcid": "4420" 00:16:27.000 }, 00:16:27.000 "peer_address": { 00:16:27.000 
"trtype": "TCP", 00:16:27.000 "adrfam": "IPv4", 00:16:27.000 "traddr": "10.0.0.1", 00:16:27.000 "trsvcid": "39686" 00:16:27.000 }, 00:16:27.000 "auth": { 00:16:27.000 "state": "completed", 00:16:27.000 "digest": "sha512", 00:16:27.000 "dhgroup": "ffdhe8192" 00:16:27.000 } 00:16:27.000 } 00:16:27.000 ]' 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.000 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.256 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:27.257 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjUyNjk1YzI0YTVkMmI3MDRiOThiYTRhYTQxOTg2NjdkMjg3MDA2Y2UyOTQ0N2IzNUmWLA==: --dhchap-ctrl-secret DHHC-1:03:MzFhMzBjNGFkMDk5MGNhNmI5YzM4MDQwNzQ4YmFkN2MwZjM1YTAyNmIxZWIzY2NmZjBmMTMzZWY3MmYxZmU4NPEQ2RA=: 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:27.822 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:28.385 request: 00:16:28.385 { 00:16:28.386 "name": "nvme0", 00:16:28.386 "trtype": "tcp", 00:16:28.386 "traddr": "10.0.0.2", 00:16:28.386 "adrfam": "ipv4", 00:16:28.386 "trsvcid": "4420", 00:16:28.386 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:28.386 "prchk_reftag": false, 00:16:28.386 "prchk_guard": false, 00:16:28.386 "hdgst": false, 00:16:28.386 "ddgst": false, 00:16:28.386 "dhchap_key": "key2", 00:16:28.386 "allow_unrecognized_csi": false, 00:16:28.386 "method": "bdev_nvme_attach_controller", 00:16:28.386 "req_id": 1 00:16:28.386 } 00:16:28.386 Got JSON-RPC error response 00:16:28.386 response: 00:16:28.386 { 00:16:28.386 "code": -5, 00:16:28.386 "message": "Input/output error" 00:16:28.386 } 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.386 09:27:40 
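Note on the failure above: it is deliberate. The target entry only carries key1, the host dials in with key2, and bdev_nvme_attach_controller comes back with JSON-RPC code -5 (Input/output error), which the script's NOT helper from autotest_common.sh converts into a pass. A simplified sketch of that assertion pattern; the helper name here is illustrative, the RPC call is the one from the trace:

    expect_failure() {
        # succeed only if the wrapped command fails
        if "$@"; then
            echo "unexpected success: $*" >&2
            return 1
        fi
    }
    expect_failure /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2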
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.386 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:28.643 request: 00:16:28.643 { 00:16:28.643 "name": "nvme0", 00:16:28.643 "trtype": "tcp", 00:16:28.643 "traddr": "10.0.0.2", 00:16:28.643 "adrfam": "ipv4", 00:16:28.643 "trsvcid": "4420", 00:16:28.643 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:28.643 "prchk_reftag": false, 00:16:28.643 "prchk_guard": false, 00:16:28.643 "hdgst": false, 00:16:28.643 "ddgst": false, 00:16:28.643 "dhchap_key": "key1", 00:16:28.643 "dhchap_ctrlr_key": "ckey2", 00:16:28.643 "allow_unrecognized_csi": false, 00:16:28.643 "method": "bdev_nvme_attach_controller", 00:16:28.643 "req_id": 1 00:16:28.643 } 00:16:28.643 Got JSON-RPC error response 00:16:28.643 response: 00:16:28.643 { 00:16:28.643 "code": -5, 00:16:28.643 "message": "Input/output error" 00:16:28.643 } 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:28.643 09:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.643 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.899 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:28.900 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.900 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.900 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.900 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.156 request: 00:16:29.156 { 00:16:29.156 "name": "nvme0", 00:16:29.156 "trtype": "tcp", 00:16:29.156 "traddr": "10.0.0.2", 00:16:29.156 "adrfam": "ipv4", 00:16:29.156 "trsvcid": "4420", 00:16:29.156 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:29.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:29.156 "prchk_reftag": false, 00:16:29.156 "prchk_guard": false, 00:16:29.156 "hdgst": false, 00:16:29.156 "ddgst": false, 00:16:29.156 "dhchap_key": "key1", 00:16:29.156 "dhchap_ctrlr_key": "ckey1", 00:16:29.156 "allow_unrecognized_csi": false, 00:16:29.156 "method": "bdev_nvme_attach_controller", 00:16:29.156 "req_id": 1 00:16:29.156 } 00:16:29.156 Got JSON-RPC error response 00:16:29.156 response: 00:16:29.156 { 00:16:29.156 "code": -5, 00:16:29.156 "message": "Input/output error" 00:16:29.156 } 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3304051 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3304051 ']' 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3304051 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.156 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3304051 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3304051' 00:16:29.414 killing process with pid 3304051 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3304051 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3304051 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3325744 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3325744 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3325744 ']' 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.414 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3325744 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3325744 ']' 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
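Note on the relaunch above: for the keyring-based half of the test the target is restarted inside the CI netns with --wait-for-rpc, which defers framework initialization until an explicit RPC so the DH-HMAC-CHAP keys can be registered first, and with -L nvmf_auth to enable the auth debug log component. A sketch of that launch using the binary path, flags and netns name from this run:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # the script then waits for /var/tmp/spdk.sock to come up before issuing RPCs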
00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.672 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 null0 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xQg 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Jed ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jed 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.eeW 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hQb ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hQb 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:29.930 09:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yt8 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.z8P ]] 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z8P 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.930 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L0h 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
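Note on the keyring_file_add_key calls above: they load the generated DHHC-1 secrets into the target's keyring under the names the subsystem later references (key0 through key3, plus the controller-side ckey0 through ckey2; key3 has no counterpart in this run). Each of these files appears to hold a single DHHC-1 secret string; a sketch with a placeholder secret in place of the generated one, file names as in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    printf 'DHHC-1:00:<base64 secret>:\n' > /tmp/spdk.key-null.xQg
    $RPC keyring_file_add_key key0 /tmp/spdk.key-null.xQg
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jed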
00:16:30.188 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.754 nvme0n1 00:16:30.754 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.754 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.754 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.011 { 00:16:31.011 "cntlid": 1, 00:16:31.011 "qid": 0, 00:16:31.011 "state": "enabled", 00:16:31.011 "thread": "nvmf_tgt_poll_group_000", 00:16:31.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:31.011 "listen_address": { 00:16:31.011 "trtype": "TCP", 00:16:31.011 "adrfam": "IPv4", 00:16:31.011 "traddr": "10.0.0.2", 00:16:31.011 "trsvcid": "4420" 00:16:31.011 }, 00:16:31.011 "peer_address": { 00:16:31.011 "trtype": "TCP", 00:16:31.011 "adrfam": "IPv4", 00:16:31.011 "traddr": "10.0.0.1", 00:16:31.011 "trsvcid": "39742" 00:16:31.011 }, 00:16:31.011 "auth": { 00:16:31.011 "state": "completed", 00:16:31.011 "digest": "sha512", 00:16:31.011 "dhgroup": "ffdhe8192" 00:16:31.011 } 00:16:31.011 } 00:16:31.011 ]' 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.011 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.268 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.268 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.268 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.268 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:31.268 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:31.865 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.138 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.433 request: 00:16:32.433 { 00:16:32.433 "name": "nvme0", 00:16:32.433 "trtype": "tcp", 00:16:32.433 "traddr": "10.0.0.2", 00:16:32.433 "adrfam": "ipv4", 00:16:32.433 "trsvcid": "4420", 00:16:32.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:32.433 "prchk_reftag": false, 00:16:32.433 "prchk_guard": false, 00:16:32.433 "hdgst": false, 00:16:32.433 "ddgst": false, 00:16:32.433 "dhchap_key": "key3", 00:16:32.433 "allow_unrecognized_csi": false, 00:16:32.433 "method": "bdev_nvme_attach_controller", 00:16:32.433 "req_id": 1 00:16:32.433 } 00:16:32.433 Got JSON-RPC error response 00:16:32.433 response: 00:16:32.433 { 00:16:32.433 "code": -5, 00:16:32.433 "message": "Input/output error" 00:16:32.433 } 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.433 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:32.692 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.693 request: 00:16:32.693 { 00:16:32.693 "name": "nvme0", 00:16:32.693 "trtype": "tcp", 00:16:32.693 "traddr": "10.0.0.2", 00:16:32.693 "adrfam": "ipv4", 00:16:32.693 "trsvcid": "4420", 00:16:32.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:32.693 "prchk_reftag": false, 00:16:32.693 "prchk_guard": false, 00:16:32.693 "hdgst": false, 00:16:32.693 "ddgst": false, 00:16:32.693 "dhchap_key": "key3", 00:16:32.693 "allow_unrecognized_csi": false, 00:16:32.693 "method": "bdev_nvme_attach_controller", 00:16:32.693 "req_id": 1 00:16:32.693 } 00:16:32.693 Got JSON-RPC error response 00:16:32.693 response: 00:16:32.693 { 00:16:32.693 "code": -5, 00:16:32.693 "message": "Input/output error" 00:16:32.693 } 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:32.693 09:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
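Note on the two rejected attach attempts above: both narrow the host-side offer on purpose, first limiting the digests to sha256 and then the DH groups to ffdhe2048, and in both cases the attach with key3 fails with JSON-RPC -5 as expected. Before moving on, the host options are restored to the full digest/dhgroup set; that restore call from the trace, in isolation:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192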
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:32.950 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:33.208 request: 00:16:33.208 { 00:16:33.208 "name": "nvme0", 00:16:33.208 "trtype": "tcp", 00:16:33.208 "traddr": "10.0.0.2", 00:16:33.208 "adrfam": "ipv4", 00:16:33.208 "trsvcid": "4420", 00:16:33.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:33.208 "prchk_reftag": false, 00:16:33.208 "prchk_guard": false, 00:16:33.208 "hdgst": false, 00:16:33.208 "ddgst": false, 00:16:33.208 "dhchap_key": "key0", 00:16:33.208 "dhchap_ctrlr_key": "key1", 00:16:33.208 "allow_unrecognized_csi": false, 00:16:33.208 "method": "bdev_nvme_attach_controller", 00:16:33.208 "req_id": 1 00:16:33.208 } 00:16:33.208 Got JSON-RPC error response 00:16:33.208 response: 00:16:33.208 { 00:16:33.208 "code": -5, 00:16:33.208 "message": "Input/output error" 00:16:33.208 } 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.208 09:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:33.208 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:33.465 nvme0n1 00:16:33.465 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:33.465 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:33.465 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.723 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.723 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.723 09:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:33.980 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:34.911 nvme0n1 00:16:34.911 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:34.911 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:34.911 09:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:34.912 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.168 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.168 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:35.168 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: --dhchap-ctrl-secret DHHC-1:03:MzdhMzdlMmJiOTdhNjY1NTJmZWRmNTBiMmU0Y2JhZGFhMDg3MDI1ODZmNDIwZjlmMDI1YjY4ODkwYTdkOGZlM3dCOVk=: 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.732 09:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
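Note on the rotation above: nvmf_subsystem_set_keys swaps the DH-HMAC-CHAP keys attached to an existing host entry on the target without removing the host, first to key1 and then to key2 with key3 as the controller key, after which the kernel initiator reconnects with the new secrets and the attach attempt that still uses key1 is expected to fail (as the next request/response shows). The rotation call from the trace, in isolation:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key key3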
--dhchap-key key1 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:35.732 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:36.295 request: 00:16:36.295 { 00:16:36.295 "name": "nvme0", 00:16:36.295 "trtype": "tcp", 00:16:36.295 "traddr": "10.0.0.2", 00:16:36.295 "adrfam": "ipv4", 00:16:36.295 "trsvcid": "4420", 00:16:36.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:36.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:16:36.296 "prchk_reftag": false, 00:16:36.296 "prchk_guard": false, 00:16:36.296 "hdgst": false, 00:16:36.296 "ddgst": false, 00:16:36.296 "dhchap_key": "key1", 00:16:36.296 "allow_unrecognized_csi": false, 00:16:36.296 "method": "bdev_nvme_attach_controller", 00:16:36.296 "req_id": 1 00:16:36.296 } 00:16:36.296 Got JSON-RPC error response 00:16:36.296 response: 00:16:36.296 { 00:16:36.296 "code": -5, 00:16:36.296 "message": "Input/output error" 00:16:36.296 } 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:36.296 09:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:37.227 nvme0n1 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.227 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:37.483 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:37.740 nvme0n1 00:16:37.740 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:37.740 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:37.740 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: '' 2s 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: ]] 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzhlOWYyOTU5YWI1ZjdlNjZkNzc0Y2IzMmRiNzUwNWSNIx31: 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:37.997 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: 2s 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: ]] 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Yjk4MjNhZTZmMjlmYzY1NGRhZWFlZTM3YWMyMGM0YTQ5OWU1MTIyNTBkOThjNTVhF7sIHw==: 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:40.522 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:42.418 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:42.982 nvme0n1 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:42.982 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.546 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:43.546 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:43.546 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:43.803 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:43.803 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:43.803 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:43.803 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:44.061 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:44.626 request: 00:16:44.626 { 00:16:44.626 "name": "nvme0", 00:16:44.626 "dhchap_key": "key1", 00:16:44.626 "dhchap_ctrlr_key": "key3", 00:16:44.626 "method": "bdev_nvme_set_keys", 00:16:44.626 "req_id": 1 00:16:44.626 } 00:16:44.626 Got JSON-RPC error response 00:16:44.626 response: 00:16:44.626 { 00:16:44.626 "code": -13, 00:16:44.626 "message": "Permission denied" 00:16:44.626 } 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:16:44.626 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:45.998 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:45.998 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:45.998 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:45.998 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:46.563 nvme0n1 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
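
[Editor's note, not part of the captured output] The RPC traffic above is the DH-CHAP key-rotation path this test exercises: the target first updates the keys a given host is allowed to authenticate with (nvmf_subsystem_set_keys), the host then rotates the keys on its already-attached controller (bdev_nvme_set_keys), and a rotation that does not match the target's current policy is rejected with JSON-RPC error -13 ("Permission denied"), as logged above and again just below. A minimal sketch of that sequence, built only from commands that appear in this log; the target-side call is assumed to go to the target application's default RPC socket, while the host-side calls use the /var/tmp/host.sock socket the test drives:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

  # Target side: restrict which DH-CHAP keys this host may use from now on.
  $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: rotate the keys on the attached controller to match the new policy.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # A rotation the target does not allow fails, which is the negative case checked here.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 \
      || echo "rotation rejected as expected"
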
00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:46.563 09:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:47.128 request: 00:16:47.128 { 00:16:47.128 "name": "nvme0", 00:16:47.128 "dhchap_key": "key2", 00:16:47.128 "dhchap_ctrlr_key": "key0", 00:16:47.128 "method": "bdev_nvme_set_keys", 00:16:47.128 "req_id": 1 00:16:47.128 } 00:16:47.128 Got JSON-RPC error response 00:16:47.128 response: 00:16:47.128 { 00:16:47.128 "code": -13, 00:16:47.128 "message": "Permission denied" 00:16:47.128 } 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:47.128 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.386 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:47.386 09:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:48.318 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:48.318 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:48.318 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3304074 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3304074 ']' 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3304074 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:49.624 
09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3304074 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3304074' 00:16:49.624 killing process with pid 3304074 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3304074 00:16:49.624 09:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3304074 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.624 rmmod nvme_tcp 00:16:49.624 rmmod nvme_fabrics 00:16:49.624 rmmod nvme_keyring 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3325744 ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3325744 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3325744 ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3325744 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3325744 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3325744' 00:16:49.624 killing process with pid 3325744 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3325744 00:16:49.624 09:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3325744 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.624 09:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xQg /tmp/spdk.key-sha256.eeW /tmp/spdk.key-sha384.Yt8 /tmp/spdk.key-sha512.L0h /tmp/spdk.key-sha512.Jed /tmp/spdk.key-sha384.hQb /tmp/spdk.key-sha256.z8P '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:51.525 00:16:51.525 real 2m29.634s 00:16:51.525 user 5m45.977s 00:16:51.525 sys 0m23.426s 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.525 ************************************ 00:16:51.525 END TEST nvmf_auth_target 00:16:51.525 ************************************ 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.525 ************************************ 00:16:51.525 START TEST nvmf_bdevio_no_huge 00:16:51.525 ************************************ 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:51.525 * Looking for test storage... 
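
[Editor's note, not part of the captured output] The teardown logged just above (killprocess for the host and target applications, nvmftestfini unloading nvme-tcp/nvme-fabrics/nvme-keyring, the iptables restore, the address flush on cvl_0_1 and the rm -f of the generated key files) is the cleanup every auth run performs before the next suite starts. A compact sketch assembled only from commands visible in the log; the PIDs and key-file names are specific to this run, the variable names are illustrative, and the namespace teardown is represented here only by the address-flush step the log shows:

  # Stop the host-side app and the nvmf target (3304074 and 3325744 in this run).
  kill "$host_pid" "$tgt_pid"

  # Unload the kernel NVMe/TCP initiator stack (rmmod nvme_tcp / nvme_fabrics / nvme_keyring above).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Restore iptables, dropping only the SPDK_NVMF-tagged rules the test inserted.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Flush the initiator-side address on the E810 port used for this run.
  ip -4 addr flush cvl_0_1

  # Remove the DH-CHAP key files generated for this run.
  rm -f /tmp/spdk.key-null.xQg /tmp/spdk.key-sha256.eeW /tmp/spdk.key-sha384.Yt8 \
        /tmp/spdk.key-sha512.L0h /tmp/spdk.key-sha512.Jed /tmp/spdk.key-sha384.hQb /tmp/spdk.key-sha256.z8P
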
00:16:51.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.525 --rc genhtml_branch_coverage=1 00:16:51.525 --rc genhtml_function_coverage=1 00:16:51.525 --rc genhtml_legend=1 00:16:51.525 --rc geninfo_all_blocks=1 00:16:51.525 --rc geninfo_unexecuted_blocks=1 00:16:51.525 00:16:51.525 ' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.525 --rc genhtml_branch_coverage=1 00:16:51.525 --rc genhtml_function_coverage=1 00:16:51.525 --rc genhtml_legend=1 00:16:51.525 --rc geninfo_all_blocks=1 00:16:51.525 --rc geninfo_unexecuted_blocks=1 00:16:51.525 00:16:51.525 ' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.525 --rc genhtml_branch_coverage=1 00:16:51.525 --rc genhtml_function_coverage=1 00:16:51.525 --rc genhtml_legend=1 00:16:51.525 --rc geninfo_all_blocks=1 00:16:51.525 --rc geninfo_unexecuted_blocks=1 00:16:51.525 00:16:51.525 ' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.525 --rc genhtml_branch_coverage=1 00:16:51.525 --rc genhtml_function_coverage=1 00:16:51.525 --rc genhtml_legend=1 00:16:51.525 --rc geninfo_all_blocks=1 00:16:51.525 --rc geninfo_unexecuted_blocks=1 00:16:51.525 00:16:51.525 ' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.525 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:51.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:16:51.526 09:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:16:56.787 
09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:56.787 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:56.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:56.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:56.788 Found net devices under 0000:af:00.0: cvl_0_0 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:56.788 Found net devices under 0000:af:00.1: cvl_0_1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:56.788 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:56.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:16:56.788 00:16:56.788 --- 10.0.0.2 ping statistics --- 00:16:56.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.788 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:16:56.788 00:16:56.788 --- 10.0.0.1 ping statistics --- 00:16:56.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.788 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3332467 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3332467 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3332467 ']' 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.788 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.046 [2024-12-13 09:28:09.171479] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:16:57.047 [2024-12-13 09:28:09.171531] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:57.047 [2024-12-13 09:28:09.245400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.047 [2024-12-13 09:28:09.291390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.047 [2024-12-13 09:28:09.291423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.047 [2024-12-13 09:28:09.291431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.047 [2024-12-13 09:28:09.291438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.047 [2024-12-13 09:28:09.291443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.047 [2024-12-13 09:28:09.292518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.047 [2024-12-13 09:28:09.292648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:57.047 [2024-12-13 09:28:09.292650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.047 [2024-12-13 09:28:09.292628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:57.047 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.047 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:57.047 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.047 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.047 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 [2024-12-13 09:28:09.442897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 Malloc0 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 [2024-12-13 09:28:09.487222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:57.304 { 00:16:57.304 "params": { 00:16:57.304 "name": "Nvme$subsystem", 00:16:57.304 "trtype": "$TEST_TRANSPORT", 00:16:57.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.304 "adrfam": "ipv4", 00:16:57.304 "trsvcid": "$NVMF_PORT", 00:16:57.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.304 "hdgst": ${hdgst:-false}, 00:16:57.304 "ddgst": ${ddgst:-false} 00:16:57.304 }, 00:16:57.304 "method": "bdev_nvme_attach_controller" 00:16:57.304 } 00:16:57.304 EOF 00:16:57.304 )") 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:57.304 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:57.304 "params": { 00:16:57.304 "name": "Nvme1", 00:16:57.304 "trtype": "tcp", 00:16:57.304 "traddr": "10.0.0.2", 00:16:57.304 "adrfam": "ipv4", 00:16:57.304 "trsvcid": "4420", 00:16:57.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.304 "hdgst": false, 00:16:57.304 "ddgst": false 00:16:57.304 }, 00:16:57.304 "method": "bdev_nvme_attach_controller" 00:16:57.304 }' 00:16:57.304 [2024-12-13 09:28:09.537494] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:16:57.304 [2024-12-13 09:28:09.537539] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3332494 ] 00:16:57.304 [2024-12-13 09:28:09.607508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.304 [2024-12-13 09:28:09.656080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.304 [2024-12-13 09:28:09.656105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.304 [2024-12-13 09:28:09.656108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.561 I/O targets: 00:16:57.561 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:57.561 00:16:57.561 00:16:57.561 CUnit - A unit testing framework for C - Version 2.1-3 00:16:57.561 http://cunit.sourceforge.net/ 00:16:57.561 00:16:57.561 00:16:57.561 Suite: bdevio tests on: Nvme1n1 00:16:57.561 Test: blockdev write read block ...passed 00:16:57.561 Test: blockdev write zeroes read block ...passed 00:16:57.818 Test: blockdev write zeroes read no split ...passed 00:16:57.818 Test: blockdev write zeroes read split ...passed 00:16:57.818 Test: blockdev write zeroes read split partial ...passed 00:16:57.818 Test: blockdev reset ...[2024-12-13 09:28:10.030320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:57.818 [2024-12-13 09:28:10.030394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9d30 (9): Bad file descriptor 00:16:57.818 [2024-12-13 09:28:10.085368] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:57.818 passed 00:16:57.818 Test: blockdev write read 8 blocks ...passed 00:16:57.818 Test: blockdev write read size > 128k ...passed 00:16:57.818 Test: blockdev write read invalid size ...passed 00:16:57.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.818 Test: blockdev write read max offset ...passed 00:16:58.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:58.075 Test: blockdev writev readv 8 blocks ...passed 00:16:58.075 Test: blockdev writev readv 30 x 1block ...passed 00:16:58.075 Test: blockdev writev readv block ...passed 00:16:58.075 Test: blockdev writev readv size > 128k ...passed 00:16:58.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:58.075 Test: blockdev comparev and writev ...[2024-12-13 09:28:10.258063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.258902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.075 [2024-12-13 09:28:10.258909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:58.075 passed 00:16:58.075 Test: blockdev nvme passthru rw ...passed 00:16:58.075 Test: blockdev nvme passthru vendor specific ...[2024-12-13 09:28:10.341790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.075 [2024-12-13 09:28:10.341809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.341926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.075 [2024-12-13 09:28:10.341936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.342040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.075 [2024-12-13 09:28:10.342050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:58.075 [2024-12-13 09:28:10.342156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.075 [2024-12-13 09:28:10.342170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:58.075 passed 00:16:58.075 Test: blockdev nvme admin passthru ...passed 00:16:58.075 Test: blockdev copy ...passed 00:16:58.075 00:16:58.075 Run Summary: Type Total Ran Passed Failed Inactive 00:16:58.075 suites 1 1 n/a 0 0 00:16:58.075 tests 23 23 23 0 0 00:16:58.075 asserts 152 152 152 0 n/a 00:16:58.075 00:16:58.075 Elapsed time = 1.157 seconds 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.332 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.332 rmmod nvme_tcp 00:16:58.332 rmmod nvme_fabrics 00:16:58.588 rmmod nvme_keyring 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3332467 ']' 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3332467 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3332467 ']' 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3332467 00:16:58.588 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3332467 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3332467' 00:16:58.589 killing process with pid 3332467 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3332467 00:16:58.589 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3332467 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.846 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.378 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:01.378 00:17:01.378 real 0m9.623s 00:17:01.378 user 0m10.443s 00:17:01.378 sys 0m4.902s 00:17:01.378 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.378 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.378 ************************************ 00:17:01.378 END TEST nvmf_bdevio_no_huge 00:17:01.378 ************************************ 00:17:01.378 09:28:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:01.379 ************************************ 00:17:01.379 START TEST nvmf_tls 00:17:01.379 ************************************ 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:01.379 * Looking for test storage... 00:17:01.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.379 --rc genhtml_branch_coverage=1 00:17:01.379 --rc genhtml_function_coverage=1 00:17:01.379 --rc genhtml_legend=1 00:17:01.379 --rc geninfo_all_blocks=1 00:17:01.379 --rc geninfo_unexecuted_blocks=1 00:17:01.379 00:17:01.379 ' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.379 --rc genhtml_branch_coverage=1 00:17:01.379 --rc genhtml_function_coverage=1 00:17:01.379 --rc genhtml_legend=1 00:17:01.379 --rc geninfo_all_blocks=1 00:17:01.379 --rc geninfo_unexecuted_blocks=1 00:17:01.379 00:17:01.379 ' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.379 --rc genhtml_branch_coverage=1 00:17:01.379 --rc genhtml_function_coverage=1 00:17:01.379 --rc genhtml_legend=1 00:17:01.379 --rc geninfo_all_blocks=1 00:17:01.379 --rc geninfo_unexecuted_blocks=1 00:17:01.379 00:17:01.379 ' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.379 --rc genhtml_branch_coverage=1 00:17:01.379 --rc genhtml_function_coverage=1 00:17:01.379 --rc genhtml_legend=1 00:17:01.379 --rc geninfo_all_blocks=1 00:17:01.379 --rc geninfo_unexecuted_blocks=1 00:17:01.379 00:17:01.379 ' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.379 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:01.380 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:06.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:06.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:06.646 Found net devices under 0000:af:00.0: cvl_0_0 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:06.646 Found net devices under 0000:af:00.1: cvl_0_1 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.646 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.647 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.904 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:17:06.905 00:17:06.905 --- 10.0.0.2 ping statistics --- 00:17:06.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.905 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:06.905 00:17:06.905 --- 10.0.0.1 ping statistics --- 00:17:06.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.905 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3336191 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3336191 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3336191 ']' 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.905 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.905 [2024-12-13 09:28:19.246203] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:06.905 [2024-12-13 09:28:19.246244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.163 [2024-12-13 09:28:19.313605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.163 [2024-12-13 09:28:19.353392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.163 [2024-12-13 09:28:19.353426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.163 [2024-12-13 09:28:19.353434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.163 [2024-12-13 09:28:19.353440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.163 [2024-12-13 09:28:19.353445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.163 [2024-12-13 09:28:19.353945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:07.163 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:07.421 true 00:17:07.421 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.421 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:07.678 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:07.678 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:07.678 09:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:07.678 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:07.678 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:07.936 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:07.936 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:07.936 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:08.193 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:08.193 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:08.451 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:08.709 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:08.709 09:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:08.966 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:08.966 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:08.966 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:08.966 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:08.967 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hj91Afzfyd 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ZjAg0Q93RN 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hj91Afzfyd 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ZjAg0Q93RN 00:17:09.225 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:09.483 09:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:09.741 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hj91Afzfyd 00:17:09.741 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hj91Afzfyd 00:17:09.741 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:10.000 [2024-12-13 09:28:22.201570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.000 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:10.258 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:10.258 [2024-12-13 09:28:22.574516] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.258 [2024-12-13 09:28:22.574743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.258 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:10.516 malloc0 00:17:10.516 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:10.774 09:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd 00:17:10.774 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:11.032 09:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hj91Afzfyd 00:17:23.228 Initializing NVMe Controllers 00:17:23.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:23.228 Initialization complete. Launching workers. 00:17:23.228 ======================================================== 00:17:23.228 Latency(us) 00:17:23.228 Device Information : IOPS MiB/s Average min max 00:17:23.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16921.76 66.10 3782.20 882.85 6396.92 00:17:23.228 ======================================================== 00:17:23.228 Total : 16921.76 66.10 3782.20 882.85 6396.92 00:17:23.228 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hj91Afzfyd 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hj91Afzfyd 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3338684 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3338684 /var/tmp/bdevperf.sock 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3338684 ']' 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:23.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.228 [2024-12-13 09:28:33.452252] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:23.228 [2024-12-13 09:28:33.452303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338684 ] 00:17:23.228 [2024-12-13 09:28:33.509195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.228 [2024-12-13 09:28:33.550542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd 00:17:23.228 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:23.228 [2024-12-13 09:28:33.991140] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.228 TLSTESTn1 00:17:23.228 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:23.228 Running I/O for 10 seconds... 
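Note: the xtrace above is the positive-path TLS bring-up from target/tls.sh. Condensed, with the Jenkins workspace paths abbreviated to rpc.py and friends, the sequence is roughly the sketch below; the PSK literal is the interchange-format key that format_interchange_psk printed above, and this run's key file was /tmp/tmp.hj91Afzfyd.
  # Make the ssl sock implementation the default and require TLS 1.3
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  # Store the interchange-format PSK in a 0600 key file (value taken from the log above)
  KEY_PATH=$(mktemp)          # this run got /tmp/tmp.hj91Afzfyd
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  # Target side: TCP transport, subsystem with a malloc namespace, TLS listener (-k),
  # keyring entry, and the host registered with that PSK
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$KEY_PATH"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side: spdk_nvme_perf over TLS, then bdevperf attaching through its own keyring
  ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$KEY_PATH"
  # bdevperf itself was launched with: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0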
00:17:24.162 5610.00 IOPS, 21.91 MiB/s [2024-12-13T08:28:37.461Z] 5744.50 IOPS, 22.44 MiB/s [2024-12-13T08:28:38.395Z] 5761.00 IOPS, 22.50 MiB/s [2024-12-13T08:28:39.329Z] 5800.25 IOPS, 22.66 MiB/s [2024-12-13T08:28:40.262Z] 5800.60 IOPS, 22.66 MiB/s [2024-12-13T08:28:41.195Z] 5770.33 IOPS, 22.54 MiB/s [2024-12-13T08:28:42.641Z] 5762.86 IOPS, 22.51 MiB/s [2024-12-13T08:28:43.286Z] 5738.75 IOPS, 22.42 MiB/s [2024-12-13T08:28:44.219Z] 5742.44 IOPS, 22.43 MiB/s [2024-12-13T08:28:44.220Z] 5765.00 IOPS, 22.52 MiB/s 00:17:31.854 Latency(us) 00:17:31.854 [2024-12-13T08:28:44.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.854 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.854 Verification LBA range: start 0x0 length 0x2000 00:17:31.854 TLSTESTn1 : 10.01 5770.55 22.54 0.00 0.00 22148.74 4649.94 23592.96 00:17:31.854 [2024-12-13T08:28:44.220Z] =================================================================================================================== 00:17:31.854 [2024-12-13T08:28:44.220Z] Total : 5770.55 22.54 0.00 0.00 22148.74 4649.94 23592.96 00:17:31.854 { 00:17:31.854 "results": [ 00:17:31.854 { 00:17:31.854 "job": "TLSTESTn1", 00:17:31.854 "core_mask": "0x4", 00:17:31.854 "workload": "verify", 00:17:31.854 "status": "finished", 00:17:31.854 "verify_range": { 00:17:31.854 "start": 0, 00:17:31.854 "length": 8192 00:17:31.854 }, 00:17:31.854 "queue_depth": 128, 00:17:31.854 "io_size": 4096, 00:17:31.854 "runtime": 10.012382, 00:17:31.854 "iops": 5770.554898924152, 00:17:31.854 "mibps": 22.54123007392247, 00:17:31.854 "io_failed": 0, 00:17:31.854 "io_timeout": 0, 00:17:31.854 "avg_latency_us": 22148.742991320487, 00:17:31.854 "min_latency_us": 4649.935238095238, 00:17:31.854 "max_latency_us": 23592.96 00:17:31.854 } 00:17:31.854 ], 00:17:31.854 "core_count": 1 00:17:31.854 } 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3338684 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3338684 ']' 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3338684 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3338684 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3338684' 00:17:32.112 killing process with pid 3338684 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3338684 00:17:32.112 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.112 00:17:32.112 Latency(us) 00:17:32.112 [2024-12-13T08:28:44.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.112 [2024-12-13T08:28:44.478Z] 
=================================================================================================================== 00:17:32.112 [2024-12-13T08:28:44.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3338684 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjAg0Q93RN 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjAg0Q93RN 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZjAg0Q93RN 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZjAg0Q93RN 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3340471 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3340471 /var/tmp/bdevperf.sock 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3340471 ']' 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.112 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.370 [2024-12-13 09:28:44.497453] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:32.370 [2024-12-13 09:28:44.497501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340471 ] 00:17:32.370 [2024-12-13 09:28:44.554830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.370 [2024-12-13 09:28:44.591074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.370 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.370 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.370 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZjAg0Q93RN 00:17:32.628 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:32.892 [2024-12-13 09:28:45.026922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.892 [2024-12-13 09:28:45.033699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.892 [2024-12-13 09:28:45.034227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10da410 (107): Transport endpoint is not connected 00:17:32.892 [2024-12-13 09:28:45.035221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10da410 (9): Bad file descriptor 00:17:32.892 [2024-12-13 09:28:45.036222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:32.892 [2024-12-13 09:28:45.036232] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:32.892 [2024-12-13 09:28:45.036239] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:32.892 [2024-12-13 09:28:45.036248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
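Note: this is the first negative case (target/tls.sh@147, NOT run_bdevperf with /tmp/tmp.ZjAg0Q93RN). The target only registered the first PSK for host1, so presenting the second key must fail; the JSON-RPC error dumped just below is the expected outcome. Abbreviated:
  # Load the *other* interchange PSK into the bdevperf keyring and try to attach;
  # the TLS handshake cannot complete, bdev_nvme_attach_controller returns code -5
  # ("Input/output error"), and the surrounding NOT() helper treats that as a pass.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZjAg0Q93RN
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail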
00:17:32.893 request: 00:17:32.893 { 00:17:32.893 "name": "TLSTEST", 00:17:32.893 "trtype": "tcp", 00:17:32.893 "traddr": "10.0.0.2", 00:17:32.893 "adrfam": "ipv4", 00:17:32.893 "trsvcid": "4420", 00:17:32.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.893 "prchk_reftag": false, 00:17:32.893 "prchk_guard": false, 00:17:32.893 "hdgst": false, 00:17:32.893 "ddgst": false, 00:17:32.893 "psk": "key0", 00:17:32.893 "allow_unrecognized_csi": false, 00:17:32.893 "method": "bdev_nvme_attach_controller", 00:17:32.893 "req_id": 1 00:17:32.893 } 00:17:32.893 Got JSON-RPC error response 00:17:32.893 response: 00:17:32.893 { 00:17:32.893 "code": -5, 00:17:32.893 "message": "Input/output error" 00:17:32.893 } 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3340471 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3340471 ']' 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3340471 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340471 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340471' 00:17:32.893 killing process with pid 3340471 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3340471 00:17:32.893 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.893 00:17:32.893 Latency(us) 00:17:32.893 [2024-12-13T08:28:45.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.893 [2024-12-13T08:28:45.259Z] =================================================================================================================== 00:17:32.893 [2024-12-13T08:28:45.259Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3340471 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hj91Afzfyd 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hj91Afzfyd 00:17:32.893 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hj91Afzfyd 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hj91Afzfyd 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3340494 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3340494 /var/tmp/bdevperf.sock 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3340494 ']' 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.152 [2024-12-13 09:28:45.306548] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:33.152 [2024-12-13 09:28:45.306598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340494 ] 00:17:33.152 [2024-12-13 09:28:45.363757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.152 [2024-12-13 09:28:45.402073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:33.152 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd 00:17:33.409 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:33.667 [2024-12-13 09:28:45.857718] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.667 [2024-12-13 09:28:45.865700] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:33.667 [2024-12-13 09:28:45.865723] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:33.667 [2024-12-13 09:28:45.865748] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:33.667 [2024-12-13 09:28:45.866013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e83410 (107): Transport endpoint is not connected 00:17:33.667 [2024-12-13 09:28:45.867006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e83410 (9): Bad file descriptor 00:17:33.668 [2024-12-13 09:28:45.868007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:33.668 [2024-12-13 09:28:45.868021] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:33.668 [2024-12-13 09:28:45.868028] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:33.668 [2024-12-13 09:28:45.868035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
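Note: second negative case (target/tls.sh@150, NOT run_bdevperf with host2 and the valid key). The key itself is correct, but nqn.2016-06.io.spdk:host2 was never added to the subsystem, so the target cannot resolve a PSK for the TLS identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the connection is rejected. Abbreviated:
  # Valid PSK, but an unregistered host NQN: the target-side PSK lookup fails,
  # so this attach is also expected to error out (again code -5).
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0   # expected to fail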
00:17:33.668 request: 00:17:33.668 { 00:17:33.668 "name": "TLSTEST", 00:17:33.668 "trtype": "tcp", 00:17:33.668 "traddr": "10.0.0.2", 00:17:33.668 "adrfam": "ipv4", 00:17:33.668 "trsvcid": "4420", 00:17:33.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.668 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:33.668 "prchk_reftag": false, 00:17:33.668 "prchk_guard": false, 00:17:33.668 "hdgst": false, 00:17:33.668 "ddgst": false, 00:17:33.668 "psk": "key0", 00:17:33.668 "allow_unrecognized_csi": false, 00:17:33.668 "method": "bdev_nvme_attach_controller", 00:17:33.668 "req_id": 1 00:17:33.668 } 00:17:33.668 Got JSON-RPC error response 00:17:33.668 response: 00:17:33.668 { 00:17:33.668 "code": -5, 00:17:33.668 "message": "Input/output error" 00:17:33.668 } 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3340494 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3340494 ']' 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3340494 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340494 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340494' 00:17:33.668 killing process with pid 3340494 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3340494 00:17:33.668 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.668 00:17:33.668 Latency(us) 00:17:33.668 [2024-12-13T08:28:46.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.668 [2024-12-13T08:28:46.034Z] =================================================================================================================== 00:17:33.668 [2024-12-13T08:28:46.034Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.668 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3340494 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hj91Afzfyd 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hj91Afzfyd 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hj91Afzfyd 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hj91Afzfyd 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3340717 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3340717 /var/tmp/bdevperf.sock 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3340717 ']' 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.926 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.926 [2024-12-13 09:28:46.141195] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:33.926 [2024-12-13 09:28:46.141241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340717 ] 00:17:33.926 [2024-12-13 09:28:46.198340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.926 [2024-12-13 09:28:46.234301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.184 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.184 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.184 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd 00:17:34.184 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.442 [2024-12-13 09:28:46.674426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.442 [2024-12-13 09:28:46.678984] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.442 [2024-12-13 09:28:46.679006] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.442 [2024-12-13 09:28:46.679029] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:34.442 [2024-12-13 09:28:46.679693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b23410 (107): Transport endpoint is not connected 00:17:34.442 [2024-12-13 09:28:46.680685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b23410 (9): Bad file descriptor 00:17:34.442 [2024-12-13 09:28:46.681686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:34.443 [2024-12-13 09:28:46.681696] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:34.443 [2024-12-13 09:28:46.681703] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:34.443 [2024-12-13 09:28:46.681711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
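Note: third negative case (target/tls.sh@153, NOT run_bdevperf against nqn.2016-06.io.spdk:cnode2). The key and host NQN are the registered ones, but cnode2 was never created, so the target's identity lookup ("NVMe0R01 ... cnode2") fails exactly as in the previous case, and the JSON-RPC response below is again the expected -5. Abbreviated:
  # Valid PSK and host NQN, but a subsystem NQN that does not exist on the target.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hj91Afzfyd
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail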
00:17:34.443 request: 00:17:34.443 { 00:17:34.443 "name": "TLSTEST", 00:17:34.443 "trtype": "tcp", 00:17:34.443 "traddr": "10.0.0.2", 00:17:34.443 "adrfam": "ipv4", 00:17:34.443 "trsvcid": "4420", 00:17:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:34.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.443 "prchk_reftag": false, 00:17:34.443 "prchk_guard": false, 00:17:34.443 "hdgst": false, 00:17:34.443 "ddgst": false, 00:17:34.443 "psk": "key0", 00:17:34.443 "allow_unrecognized_csi": false, 00:17:34.443 "method": "bdev_nvme_attach_controller", 00:17:34.443 "req_id": 1 00:17:34.443 } 00:17:34.443 Got JSON-RPC error response 00:17:34.443 response: 00:17:34.443 { 00:17:34.443 "code": -5, 00:17:34.443 "message": "Input/output error" 00:17:34.443 } 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3340717 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3340717 ']' 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3340717 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340717 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340717' 00:17:34.443 killing process with pid 3340717 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3340717 00:17:34.443 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.443 00:17:34.443 Latency(us) 00:17:34.443 [2024-12-13T08:28:46.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.443 [2024-12-13T08:28:46.809Z] =================================================================================================================== 00:17:34.443 [2024-12-13T08:28:46.809Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.443 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3340717 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.700 
09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3340818 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3340818 /var/tmp/bdevperf.sock 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3340818 ']' 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.700 09:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.700 [2024-12-13 09:28:46.949511] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:34.700 [2024-12-13 09:28:46.949570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340818 ] 00:17:34.700 [2024-12-13 09:28:47.014526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.700 [2024-12-13 09:28:47.056194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.958 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.958 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.958 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:34.958 [2024-12-13 09:28:47.311883] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:34.958 [2024-12-13 09:28:47.311914] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:34.958 request: 00:17:34.958 { 00:17:34.958 "name": "key0", 00:17:34.958 "path": "", 00:17:34.958 "method": "keyring_file_add_key", 00:17:34.958 "req_id": 1 00:17:34.958 } 00:17:34.958 Got JSON-RPC error response 00:17:34.958 response: 00:17:34.958 { 00:17:34.958 "code": -1, 00:17:34.958 "message": "Operation not permitted" 00:17:34.958 } 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:35.217 [2024-12-13 09:28:47.500466] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.217 [2024-12-13 09:28:47.500503] bdev_nvme.c:6755:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:35.217 request: 00:17:35.217 { 00:17:35.217 "name": "TLSTEST", 00:17:35.217 "trtype": "tcp", 00:17:35.217 "traddr": "10.0.0.2", 00:17:35.217 "adrfam": "ipv4", 00:17:35.217 "trsvcid": "4420", 00:17:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.217 "prchk_reftag": false, 00:17:35.217 "prchk_guard": false, 00:17:35.217 "hdgst": false, 00:17:35.217 "ddgst": false, 00:17:35.217 "psk": "key0", 00:17:35.217 "allow_unrecognized_csi": false, 00:17:35.217 "method": "bdev_nvme_attach_controller", 00:17:35.217 "req_id": 1 00:17:35.217 } 00:17:35.217 Got JSON-RPC error response 00:17:35.217 response: 00:17:35.217 { 00:17:35.217 "code": -126, 00:17:35.217 "message": "Required key not available" 00:17:35.217 } 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3340818 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3340818 ']' 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3340818 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3340818 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340818' 00:17:35.217 killing process with pid 3340818 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3340818 00:17:35.217 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.217 00:17:35.217 Latency(us) 00:17:35.217 [2024-12-13T08:28:47.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.217 [2024-12-13T08:28:47.583Z] =================================================================================================================== 00:17:35.217 [2024-12-13T08:28:47.583Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.217 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3340818 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3336191 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3336191 ']' 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3336191 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3336191 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3336191' 00:17:35.476 killing process with pid 3336191 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3336191 00:17:35.476 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3336191 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:35.735 09:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.O7Sx0Vvrdq 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.O7Sx0Vvrdq 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3340978 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3340978 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3340978 ']' 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.735 09:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.735 [2024-12-13 09:28:48.023010] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:35.735 [2024-12-13 09:28:48.023058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.735 [2024-12-13 09:28:48.088526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.994 [2024-12-13 09:28:48.123929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.994 [2024-12-13 09:28:48.123961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:35.994 [2024-12-13 09:28:48.123969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.994 [2024-12-13 09:28:48.123975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.994 [2024-12-13 09:28:48.123980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.994 [2024-12-13 09:28:48.124491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.O7Sx0Vvrdq 00:17:35.994 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.252 [2024-12-13 09:28:48.424488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.252 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.511 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:36.511 [2024-12-13 09:28:48.797465] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.511 [2024-12-13 09:28:48.797672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.511 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:36.769 malloc0 00:17:36.769 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.028 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:37.028 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7Sx0Vvrdq 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.O7Sx0Vvrdq 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3341241 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3341241 /var/tmp/bdevperf.sock 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3341241 ']' 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.286 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.286 [2024-12-13 09:28:49.592407] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:37.286 [2024-12-13 09:28:49.592467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341241 ] 00:17:37.286 [2024-12-13 09:28:49.650147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.545 [2024-12-13 09:28:49.690904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.545 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.545 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:37.545 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:37.804 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:37.804 [2024-12-13 09:28:50.139964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.062 TLSTESTn1 00:17:38.062 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:38.062 Running I/O for 10 seconds... 00:17:40.371 5442.00 IOPS, 21.26 MiB/s [2024-12-13T08:28:53.671Z] 5492.50 IOPS, 21.46 MiB/s [2024-12-13T08:28:54.606Z] 5503.00 IOPS, 21.50 MiB/s [2024-12-13T08:28:55.542Z] 5444.00 IOPS, 21.27 MiB/s [2024-12-13T08:28:56.477Z] 5454.80 IOPS, 21.31 MiB/s [2024-12-13T08:28:57.412Z] 5481.83 IOPS, 21.41 MiB/s [2024-12-13T08:28:58.347Z] 5482.86 IOPS, 21.42 MiB/s [2024-12-13T08:28:59.723Z] 5510.62 IOPS, 21.53 MiB/s [2024-12-13T08:29:00.657Z] 5515.11 IOPS, 21.54 MiB/s [2024-12-13T08:29:00.658Z] 5507.60 IOPS, 21.51 MiB/s 00:17:48.292 Latency(us) 00:17:48.292 [2024-12-13T08:29:00.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.292 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.292 Verification LBA range: start 0x0 length 0x2000 00:17:48.292 TLSTESTn1 : 10.01 5512.20 21.53 0.00 0.00 23186.07 5991.86 23218.47 00:17:48.292 [2024-12-13T08:29:00.658Z] =================================================================================================================== 00:17:48.292 [2024-12-13T08:29:00.658Z] Total : 5512.20 21.53 0.00 0.00 23186.07 5991.86 23218.47 00:17:48.292 { 00:17:48.292 "results": [ 00:17:48.292 { 00:17:48.292 "job": "TLSTESTn1", 00:17:48.292 "core_mask": "0x4", 00:17:48.292 "workload": "verify", 00:17:48.292 "status": "finished", 00:17:48.292 "verify_range": { 00:17:48.292 "start": 0, 00:17:48.292 "length": 8192 00:17:48.292 }, 00:17:48.292 "queue_depth": 128, 00:17:48.292 "io_size": 4096, 00:17:48.292 "runtime": 10.014687, 00:17:48.292 "iops": 5512.2042256537825, 00:17:48.292 "mibps": 21.532047756460088, 00:17:48.292 "io_failed": 0, 00:17:48.292 "io_timeout": 0, 00:17:48.292 "avg_latency_us": 23186.073487862548, 00:17:48.292 "min_latency_us": 5991.862857142857, 00:17:48.292 "max_latency_us": 23218.46857142857 00:17:48.292 } 00:17:48.292 ], 00:17:48.292 
"core_count": 1 00:17:48.292 } 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3341241 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3341241 ']' 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3341241 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3341241 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3341241' 00:17:48.292 killing process with pid 3341241 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3341241 00:17:48.292 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.292 00:17:48.292 Latency(us) 00:17:48.292 [2024-12-13T08:29:00.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.292 [2024-12-13T08:29:00.658Z] =================================================================================================================== 00:17:48.292 [2024-12-13T08:29:00.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3341241 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.O7Sx0Vvrdq 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7Sx0Vvrdq 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7Sx0Vvrdq 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O7Sx0Vvrdq 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.O7Sx0Vvrdq 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3343012 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3343012 /var/tmp/bdevperf.sock 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3343012 ']' 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.292 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.292 [2024-12-13 09:29:00.647567] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:17:48.292 [2024-12-13 09:29:00.647615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343012 ] 00:17:48.551 [2024-12-13 09:29:00.705722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.551 [2024-12-13 09:29:00.744929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.551 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.551 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:48.551 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:48.808 [2024-12-13 09:29:01.016833] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.O7Sx0Vvrdq': 0100666 00:17:48.808 [2024-12-13 09:29:01.016859] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:48.808 request: 00:17:48.808 { 00:17:48.808 "name": "key0", 00:17:48.808 "path": "/tmp/tmp.O7Sx0Vvrdq", 00:17:48.808 "method": "keyring_file_add_key", 00:17:48.808 "req_id": 1 00:17:48.808 } 00:17:48.808 Got JSON-RPC error response 00:17:48.808 response: 00:17:48.808 { 00:17:48.808 "code": -1, 00:17:48.808 "message": "Operation not permitted" 00:17:48.808 } 00:17:48.808 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.067 [2024-12-13 09:29:01.201392] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.067 [2024-12-13 09:29:01.201427] bdev_nvme.c:6755:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:49.067 request: 00:17:49.067 { 00:17:49.067 "name": "TLSTEST", 00:17:49.067 "trtype": "tcp", 00:17:49.067 "traddr": "10.0.0.2", 00:17:49.067 "adrfam": "ipv4", 00:17:49.067 "trsvcid": "4420", 00:17:49.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.067 "prchk_reftag": false, 00:17:49.067 "prchk_guard": false, 00:17:49.067 "hdgst": false, 00:17:49.067 "ddgst": false, 00:17:49.067 "psk": "key0", 00:17:49.067 "allow_unrecognized_csi": false, 00:17:49.067 "method": "bdev_nvme_attach_controller", 00:17:49.067 "req_id": 1 00:17:49.067 } 00:17:49.067 Got JSON-RPC error response 00:17:49.067 response: 00:17:49.067 { 00:17:49.067 "code": -126, 00:17:49.067 "message": "Required key not available" 00:17:49.067 } 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3343012 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3343012 ']' 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3343012 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343012 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343012' 00:17:49.067 killing process with pid 3343012 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3343012 00:17:49.067 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.067 00:17:49.067 Latency(us) 00:17:49.067 [2024-12-13T08:29:01.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.067 [2024-12-13T08:29:01.433Z] =================================================================================================================== 00:17:49.067 [2024-12-13T08:29:01.433Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3343012 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3340978 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3340978 ']' 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3340978 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.067 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3340978 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3340978' 00:17:49.325 killing process with pid 3340978 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3340978 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3340978 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3343249 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3343249 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3343249 ']' 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.325 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.326 [2024-12-13 09:29:01.680756] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:49.326 [2024-12-13 09:29:01.680804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.584 [2024-12-13 09:29:01.746693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.584 [2024-12-13 09:29:01.785240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.584 [2024-12-13 09:29:01.785273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.584 [2024-12-13 09:29:01.785280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.584 [2024-12-13 09:29:01.785286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.584 [2024-12-13 09:29:01.785291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
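The keyring failures exercised in this run all come from keyring_file_check_path: an empty or non-absolute path is rejected outright ("Non-absolute paths are not allowed", seen near the top of this trace), and a key file whose mode grants group/other access is rejected as well, which is why the branch just above fails with "Invalid permissions for key file '/tmp/tmp.O7Sx0Vvrdq': 0100666" after the chmod 0666 at target/tls.sh@171; the same check is hit once more against the nvmf target below before the key is restored to 0600. A condensed sketch of the two outcomes, using the same rpc.py and key path as this run (add -s /var/tmp/bdevperf.sock to target the bdevperf instance, as the test does):

  chmod 0666 /tmp/tmp.O7Sx0Vvrdq
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq
  # -> rejected: the app logs "Invalid permissions for key file '/tmp/tmp.O7Sx0Vvrdq': 0100666"
  #    and the RPC returns code -1, "Operation not permitted"
  chmod 0600 /tmp/tmp.O7Sx0Vvrdq
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq
  # -> succeeds; key0 can then be referenced as --psk key0 in nvmf_subsystem_add_host
  #    and bdev_nvme_attach_controller, as the passing runs in this trace do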
00:17:49.584 [2024-12-13 09:29:01.785792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.O7Sx0Vvrdq 00:17:49.584 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.842 [2024-12-13 09:29:02.089505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.842 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.100 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.359 [2024-12-13 09:29:02.478486] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.359 [2024-12-13 09:29:02.478678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.359 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.359 malloc0 00:17:50.359 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.617 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:50.876 [2024-12-13 
09:29:03.039848] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.O7Sx0Vvrdq': 0100666 00:17:50.876 [2024-12-13 09:29:03.039875] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:50.876 request: 00:17:50.876 { 00:17:50.876 "name": "key0", 00:17:50.876 "path": "/tmp/tmp.O7Sx0Vvrdq", 00:17:50.876 "method": "keyring_file_add_key", 00:17:50.876 "req_id": 1 00:17:50.876 } 00:17:50.876 Got JSON-RPC error response 00:17:50.876 response: 00:17:50.876 { 00:17:50.877 "code": -1, 00:17:50.877 "message": "Operation not permitted" 00:17:50.877 } 00:17:50.877 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.877 [2024-12-13 09:29:03.224342] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:50.877 [2024-12-13 09:29:03.224373] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:50.877 request: 00:17:50.877 { 00:17:50.877 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.877 "host": "nqn.2016-06.io.spdk:host1", 00:17:50.877 "psk": "key0", 00:17:50.877 "method": "nvmf_subsystem_add_host", 00:17:50.877 "req_id": 1 00:17:50.877 } 00:17:50.877 Got JSON-RPC error response 00:17:50.877 response: 00:17:50.877 { 00:17:50.877 "code": -32603, 00:17:50.877 "message": "Internal error" 00:17:50.877 } 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3343249 ']' 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343249' 00:17:51.135 killing process with pid 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3343249 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.O7Sx0Vvrdq 00:17:51.135 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:51.136 09:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3343522 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3343522 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3343522 ']' 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.136 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.394 [2024-12-13 09:29:03.505680] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:51.394 [2024-12-13 09:29:03.505724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.394 [2024-12-13 09:29:03.573014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.394 [2024-12-13 09:29:03.611838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.394 [2024-12-13 09:29:03.611871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.394 [2024-12-13 09:29:03.611878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.394 [2024-12-13 09:29:03.611884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.394 [2024-12-13 09:29:03.611890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
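For reference before the setup below succeeds: /tmp/tmp.O7Sx0Vvrdq holds the TLS PSK interchange string built by format_interchange_psk at target/tls.sh@160, i.e. the NVMeTLSkey-1 prefix, a hash field of 02 (from the '2' argument), and base64 of the 48-character key text with a 4-byte CRC32 appended. The sketch below reconstructs that encoding; the little-endian CRC byte order and the python3 invocation are assumptions (the test itself pipes into 'python -'), so compare the printed value against the key_long captured earlier in this trace:

  # Assumption: the 4-byte CRC32 is appended little-endian before base64; the key text
  # itself (not decoded hex) is what gets encoded, per the base64 payload in this trace.
  key=00112233445566778899aabbccddeeff0011223344556677
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"

The resulting string is what keyring_file_add_key loads below and what both nvmf_subsystem_add_host --psk key0 and bdev_nvme_attach_controller --psk key0 resolve at connect time.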
00:17:51.394 [2024-12-13 09:29:03.612389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.394 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.394 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:51.394 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.O7Sx0Vvrdq 00:17:51.395 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.652 [2024-12-13 09:29:03.912381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.652 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.910 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:52.168 [2024-12-13 09:29:04.277301] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.168 [2024-12-13 09:29:04.277501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.168 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.168 malloc0 00:17:52.168 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.427 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:52.685 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3343886 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3343886 /var/tmp/bdevperf.sock 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3343886 ']' 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.685 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.954 [2024-12-13 09:29:05.092924] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:52.954 [2024-12-13 09:29:05.092975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343886 ] 00:17:52.954 [2024-12-13 09:29:05.154029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.954 [2024-12-13 09:29:05.194945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.954 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.954 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:52.954 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:17:53.214 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:53.472 [2024-12-13 09:29:05.631328] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.472 TLSTESTn1 00:17:53.472 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:53.731 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:53.731 "subsystems": [ 00:17:53.731 { 00:17:53.731 "subsystem": "keyring", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "keyring_file_add_key", 00:17:53.731 "params": { 00:17:53.731 "name": "key0", 00:17:53.731 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:17:53.731 } 00:17:53.731 } 00:17:53.731 ] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "iobuf", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "iobuf_set_options", 00:17:53.731 "params": { 00:17:53.731 "small_pool_count": 8192, 00:17:53.731 "large_pool_count": 1024, 00:17:53.731 "small_bufsize": 8192, 00:17:53.731 "large_bufsize": 135168, 00:17:53.731 "enable_numa": false 00:17:53.731 } 00:17:53.731 } 00:17:53.731 ] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "sock", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "sock_set_default_impl", 00:17:53.731 "params": { 00:17:53.731 "impl_name": "posix" 
00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "sock_impl_set_options", 00:17:53.731 "params": { 00:17:53.731 "impl_name": "ssl", 00:17:53.731 "recv_buf_size": 4096, 00:17:53.731 "send_buf_size": 4096, 00:17:53.731 "enable_recv_pipe": true, 00:17:53.731 "enable_quickack": false, 00:17:53.731 "enable_placement_id": 0, 00:17:53.731 "enable_zerocopy_send_server": true, 00:17:53.731 "enable_zerocopy_send_client": false, 00:17:53.731 "zerocopy_threshold": 0, 00:17:53.731 "tls_version": 0, 00:17:53.731 "enable_ktls": false 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "sock_impl_set_options", 00:17:53.731 "params": { 00:17:53.731 "impl_name": "posix", 00:17:53.731 "recv_buf_size": 2097152, 00:17:53.731 "send_buf_size": 2097152, 00:17:53.731 "enable_recv_pipe": true, 00:17:53.731 "enable_quickack": false, 00:17:53.731 "enable_placement_id": 0, 00:17:53.731 "enable_zerocopy_send_server": true, 00:17:53.731 "enable_zerocopy_send_client": false, 00:17:53.731 "zerocopy_threshold": 0, 00:17:53.731 "tls_version": 0, 00:17:53.731 "enable_ktls": false 00:17:53.731 } 00:17:53.731 } 00:17:53.731 ] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "vmd", 00:17:53.731 "config": [] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "accel", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "accel_set_options", 00:17:53.731 "params": { 00:17:53.731 "small_cache_size": 128, 00:17:53.731 "large_cache_size": 16, 00:17:53.731 "task_count": 2048, 00:17:53.731 "sequence_count": 2048, 00:17:53.731 "buf_count": 2048 00:17:53.731 } 00:17:53.731 } 00:17:53.731 ] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "bdev", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "bdev_set_options", 00:17:53.731 "params": { 00:17:53.731 "bdev_io_pool_size": 65535, 00:17:53.731 "bdev_io_cache_size": 256, 00:17:53.731 "bdev_auto_examine": true, 00:17:53.731 "iobuf_small_cache_size": 128, 00:17:53.731 "iobuf_large_cache_size": 16 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_raid_set_options", 00:17:53.731 "params": { 00:17:53.731 "process_window_size_kb": 1024, 00:17:53.731 "process_max_bandwidth_mb_sec": 0 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_iscsi_set_options", 00:17:53.731 "params": { 00:17:53.731 "timeout_sec": 30 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_nvme_set_options", 00:17:53.731 "params": { 00:17:53.731 "action_on_timeout": "none", 00:17:53.731 "timeout_us": 0, 00:17:53.731 "timeout_admin_us": 0, 00:17:53.731 "keep_alive_timeout_ms": 10000, 00:17:53.731 "arbitration_burst": 0, 00:17:53.731 "low_priority_weight": 0, 00:17:53.731 "medium_priority_weight": 0, 00:17:53.731 "high_priority_weight": 0, 00:17:53.731 "nvme_adminq_poll_period_us": 10000, 00:17:53.731 "nvme_ioq_poll_period_us": 0, 00:17:53.731 "io_queue_requests": 0, 00:17:53.731 "delay_cmd_submit": true, 00:17:53.731 "transport_retry_count": 4, 00:17:53.731 "bdev_retry_count": 3, 00:17:53.731 "transport_ack_timeout": 0, 00:17:53.731 "ctrlr_loss_timeout_sec": 0, 00:17:53.731 "reconnect_delay_sec": 0, 00:17:53.731 "fast_io_fail_timeout_sec": 0, 00:17:53.731 "disable_auto_failback": false, 00:17:53.731 "generate_uuids": false, 00:17:53.731 "transport_tos": 0, 00:17:53.731 "nvme_error_stat": false, 00:17:53.731 "rdma_srq_size": 0, 00:17:53.731 "io_path_stat": false, 00:17:53.731 "allow_accel_sequence": false, 00:17:53.731 "rdma_max_cq_size": 0, 00:17:53.731 
"rdma_cm_event_timeout_ms": 0, 00:17:53.731 "dhchap_digests": [ 00:17:53.731 "sha256", 00:17:53.731 "sha384", 00:17:53.731 "sha512" 00:17:53.731 ], 00:17:53.731 "dhchap_dhgroups": [ 00:17:53.731 "null", 00:17:53.731 "ffdhe2048", 00:17:53.731 "ffdhe3072", 00:17:53.731 "ffdhe4096", 00:17:53.731 "ffdhe6144", 00:17:53.731 "ffdhe8192" 00:17:53.731 ] 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_nvme_set_hotplug", 00:17:53.731 "params": { 00:17:53.731 "period_us": 100000, 00:17:53.731 "enable": false 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_malloc_create", 00:17:53.731 "params": { 00:17:53.731 "name": "malloc0", 00:17:53.731 "num_blocks": 8192, 00:17:53.731 "block_size": 4096, 00:17:53.731 "physical_block_size": 4096, 00:17:53.731 "uuid": "7bd0cf86-bcc7-4a1c-8575-f4d5f3d29f85", 00:17:53.731 "optimal_io_boundary": 0, 00:17:53.731 "md_size": 0, 00:17:53.731 "dif_type": 0, 00:17:53.731 "dif_is_head_of_md": false, 00:17:53.731 "dif_pi_format": 0 00:17:53.731 } 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "method": "bdev_wait_for_examine" 00:17:53.731 } 00:17:53.731 ] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "nbd", 00:17:53.731 "config": [] 00:17:53.731 }, 00:17:53.731 { 00:17:53.731 "subsystem": "scheduler", 00:17:53.731 "config": [ 00:17:53.731 { 00:17:53.731 "method": "framework_set_scheduler", 00:17:53.731 "params": { 00:17:53.731 "name": "static" 00:17:53.731 } 00:17:53.731 } 00:17:53.732 ] 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "subsystem": "nvmf", 00:17:53.732 "config": [ 00:17:53.732 { 00:17:53.732 "method": "nvmf_set_config", 00:17:53.732 "params": { 00:17:53.732 "discovery_filter": "match_any", 00:17:53.732 "admin_cmd_passthru": { 00:17:53.732 "identify_ctrlr": false 00:17:53.732 }, 00:17:53.732 "dhchap_digests": [ 00:17:53.732 "sha256", 00:17:53.732 "sha384", 00:17:53.732 "sha512" 00:17:53.732 ], 00:17:53.732 "dhchap_dhgroups": [ 00:17:53.732 "null", 00:17:53.732 "ffdhe2048", 00:17:53.732 "ffdhe3072", 00:17:53.732 "ffdhe4096", 00:17:53.732 "ffdhe6144", 00:17:53.732 "ffdhe8192" 00:17:53.732 ] 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_set_max_subsystems", 00:17:53.732 "params": { 00:17:53.732 "max_subsystems": 1024 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_set_crdt", 00:17:53.732 "params": { 00:17:53.732 "crdt1": 0, 00:17:53.732 "crdt2": 0, 00:17:53.732 "crdt3": 0 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_create_transport", 00:17:53.732 "params": { 00:17:53.732 "trtype": "TCP", 00:17:53.732 "max_queue_depth": 128, 00:17:53.732 "max_io_qpairs_per_ctrlr": 127, 00:17:53.732 "in_capsule_data_size": 4096, 00:17:53.732 "max_io_size": 131072, 00:17:53.732 "io_unit_size": 131072, 00:17:53.732 "max_aq_depth": 128, 00:17:53.732 "num_shared_buffers": 511, 00:17:53.732 "buf_cache_size": 4294967295, 00:17:53.732 "dif_insert_or_strip": false, 00:17:53.732 "zcopy": false, 00:17:53.732 "c2h_success": false, 00:17:53.732 "sock_priority": 0, 00:17:53.732 "abort_timeout_sec": 1, 00:17:53.732 "ack_timeout": 0, 00:17:53.732 "data_wr_pool_size": 0 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_create_subsystem", 00:17:53.732 "params": { 00:17:53.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.732 "allow_any_host": false, 00:17:53.732 "serial_number": "SPDK00000000000001", 00:17:53.732 "model_number": "SPDK bdev Controller", 00:17:53.732 "max_namespaces": 10, 00:17:53.732 "min_cntlid": 1, 00:17:53.732 
"max_cntlid": 65519, 00:17:53.732 "ana_reporting": false 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_subsystem_add_host", 00:17:53.732 "params": { 00:17:53.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.732 "host": "nqn.2016-06.io.spdk:host1", 00:17:53.732 "psk": "key0" 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_subsystem_add_ns", 00:17:53.732 "params": { 00:17:53.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.732 "namespace": { 00:17:53.732 "nsid": 1, 00:17:53.732 "bdev_name": "malloc0", 00:17:53.732 "nguid": "7BD0CF86BCC74A1C8575F4D5F3D29F85", 00:17:53.732 "uuid": "7bd0cf86-bcc7-4a1c-8575-f4d5f3d29f85", 00:17:53.732 "no_auto_visible": false 00:17:53.732 } 00:17:53.732 } 00:17:53.732 }, 00:17:53.732 { 00:17:53.732 "method": "nvmf_subsystem_add_listener", 00:17:53.732 "params": { 00:17:53.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.732 "listen_address": { 00:17:53.732 "trtype": "TCP", 00:17:53.732 "adrfam": "IPv4", 00:17:53.732 "traddr": "10.0.0.2", 00:17:53.732 "trsvcid": "4420" 00:17:53.732 }, 00:17:53.732 "secure_channel": true 00:17:53.732 } 00:17:53.732 } 00:17:53.732 ] 00:17:53.732 } 00:17:53.732 ] 00:17:53.732 }' 00:17:53.732 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:53.991 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:53.991 "subsystems": [ 00:17:53.991 { 00:17:53.991 "subsystem": "keyring", 00:17:53.991 "config": [ 00:17:53.991 { 00:17:53.991 "method": "keyring_file_add_key", 00:17:53.991 "params": { 00:17:53.991 "name": "key0", 00:17:53.991 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:17:53.991 } 00:17:53.991 } 00:17:53.991 ] 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "subsystem": "iobuf", 00:17:53.991 "config": [ 00:17:53.991 { 00:17:53.991 "method": "iobuf_set_options", 00:17:53.991 "params": { 00:17:53.991 "small_pool_count": 8192, 00:17:53.991 "large_pool_count": 1024, 00:17:53.991 "small_bufsize": 8192, 00:17:53.991 "large_bufsize": 135168, 00:17:53.991 "enable_numa": false 00:17:53.991 } 00:17:53.991 } 00:17:53.991 ] 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "subsystem": "sock", 00:17:53.991 "config": [ 00:17:53.991 { 00:17:53.991 "method": "sock_set_default_impl", 00:17:53.991 "params": { 00:17:53.991 "impl_name": "posix" 00:17:53.991 } 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "method": "sock_impl_set_options", 00:17:53.991 "params": { 00:17:53.991 "impl_name": "ssl", 00:17:53.991 "recv_buf_size": 4096, 00:17:53.991 "send_buf_size": 4096, 00:17:53.991 "enable_recv_pipe": true, 00:17:53.991 "enable_quickack": false, 00:17:53.991 "enable_placement_id": 0, 00:17:53.991 "enable_zerocopy_send_server": true, 00:17:53.991 "enable_zerocopy_send_client": false, 00:17:53.991 "zerocopy_threshold": 0, 00:17:53.991 "tls_version": 0, 00:17:53.991 "enable_ktls": false 00:17:53.991 } 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "method": "sock_impl_set_options", 00:17:53.991 "params": { 00:17:53.991 "impl_name": "posix", 00:17:53.991 "recv_buf_size": 2097152, 00:17:53.991 "send_buf_size": 2097152, 00:17:53.991 "enable_recv_pipe": true, 00:17:53.991 "enable_quickack": false, 00:17:53.991 "enable_placement_id": 0, 00:17:53.991 "enable_zerocopy_send_server": true, 00:17:53.991 "enable_zerocopy_send_client": false, 00:17:53.991 "zerocopy_threshold": 0, 00:17:53.991 "tls_version": 0, 00:17:53.991 "enable_ktls": false 00:17:53.991 } 00:17:53.991 
} 00:17:53.991 ] 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "subsystem": "vmd", 00:17:53.991 "config": [] 00:17:53.991 }, 00:17:53.991 { 00:17:53.991 "subsystem": "accel", 00:17:53.991 "config": [ 00:17:53.991 { 00:17:53.991 "method": "accel_set_options", 00:17:53.991 "params": { 00:17:53.991 "small_cache_size": 128, 00:17:53.992 "large_cache_size": 16, 00:17:53.992 "task_count": 2048, 00:17:53.992 "sequence_count": 2048, 00:17:53.992 "buf_count": 2048 00:17:53.992 } 00:17:53.992 } 00:17:53.992 ] 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "subsystem": "bdev", 00:17:53.992 "config": [ 00:17:53.992 { 00:17:53.992 "method": "bdev_set_options", 00:17:53.992 "params": { 00:17:53.992 "bdev_io_pool_size": 65535, 00:17:53.992 "bdev_io_cache_size": 256, 00:17:53.992 "bdev_auto_examine": true, 00:17:53.992 "iobuf_small_cache_size": 128, 00:17:53.992 "iobuf_large_cache_size": 16 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": "bdev_raid_set_options", 00:17:53.992 "params": { 00:17:53.992 "process_window_size_kb": 1024, 00:17:53.992 "process_max_bandwidth_mb_sec": 0 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": "bdev_iscsi_set_options", 00:17:53.992 "params": { 00:17:53.992 "timeout_sec": 30 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": "bdev_nvme_set_options", 00:17:53.992 "params": { 00:17:53.992 "action_on_timeout": "none", 00:17:53.992 "timeout_us": 0, 00:17:53.992 "timeout_admin_us": 0, 00:17:53.992 "keep_alive_timeout_ms": 10000, 00:17:53.992 "arbitration_burst": 0, 00:17:53.992 "low_priority_weight": 0, 00:17:53.992 "medium_priority_weight": 0, 00:17:53.992 "high_priority_weight": 0, 00:17:53.992 "nvme_adminq_poll_period_us": 10000, 00:17:53.992 "nvme_ioq_poll_period_us": 0, 00:17:53.992 "io_queue_requests": 512, 00:17:53.992 "delay_cmd_submit": true, 00:17:53.992 "transport_retry_count": 4, 00:17:53.992 "bdev_retry_count": 3, 00:17:53.992 "transport_ack_timeout": 0, 00:17:53.992 "ctrlr_loss_timeout_sec": 0, 00:17:53.992 "reconnect_delay_sec": 0, 00:17:53.992 "fast_io_fail_timeout_sec": 0, 00:17:53.992 "disable_auto_failback": false, 00:17:53.992 "generate_uuids": false, 00:17:53.992 "transport_tos": 0, 00:17:53.992 "nvme_error_stat": false, 00:17:53.992 "rdma_srq_size": 0, 00:17:53.992 "io_path_stat": false, 00:17:53.992 "allow_accel_sequence": false, 00:17:53.992 "rdma_max_cq_size": 0, 00:17:53.992 "rdma_cm_event_timeout_ms": 0, 00:17:53.992 "dhchap_digests": [ 00:17:53.992 "sha256", 00:17:53.992 "sha384", 00:17:53.992 "sha512" 00:17:53.992 ], 00:17:53.992 "dhchap_dhgroups": [ 00:17:53.992 "null", 00:17:53.992 "ffdhe2048", 00:17:53.992 "ffdhe3072", 00:17:53.992 "ffdhe4096", 00:17:53.992 "ffdhe6144", 00:17:53.992 "ffdhe8192" 00:17:53.992 ] 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": "bdev_nvme_attach_controller", 00:17:53.992 "params": { 00:17:53.992 "name": "TLSTEST", 00:17:53.992 "trtype": "TCP", 00:17:53.992 "adrfam": "IPv4", 00:17:53.992 "traddr": "10.0.0.2", 00:17:53.992 "trsvcid": "4420", 00:17:53.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.992 "prchk_reftag": false, 00:17:53.992 "prchk_guard": false, 00:17:53.992 "ctrlr_loss_timeout_sec": 0, 00:17:53.992 "reconnect_delay_sec": 0, 00:17:53.992 "fast_io_fail_timeout_sec": 0, 00:17:53.992 "psk": "key0", 00:17:53.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.992 "hdgst": false, 00:17:53.992 "ddgst": false, 00:17:53.992 "multipath": "multipath" 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": 
"bdev_nvme_set_hotplug", 00:17:53.992 "params": { 00:17:53.992 "period_us": 100000, 00:17:53.992 "enable": false 00:17:53.992 } 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "method": "bdev_wait_for_examine" 00:17:53.992 } 00:17:53.992 ] 00:17:53.992 }, 00:17:53.992 { 00:17:53.992 "subsystem": "nbd", 00:17:53.992 "config": [] 00:17:53.992 } 00:17:53.992 ] 00:17:53.992 }' 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3343886 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3343886 ']' 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3343886 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343886 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343886' 00:17:53.992 killing process with pid 3343886 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3343886 00:17:53.992 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.992 00:17:53.992 Latency(us) 00:17:53.992 [2024-12-13T08:29:06.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.992 [2024-12-13T08:29:06.358Z] =================================================================================================================== 00:17:53.992 [2024-12-13T08:29:06.358Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.992 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3343886 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3343522 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3343522 ']' 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3343522 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343522 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343522' 00:17:54.251 killing process with pid 3343522 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3343522 00:17:54.251 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3343522 00:17:54.510 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:54.510 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.510 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.510 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:54.510 "subsystems": [ 00:17:54.510 { 00:17:54.510 "subsystem": "keyring", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "keyring_file_add_key", 00:17:54.510 "params": { 00:17:54.510 "name": "key0", 00:17:54.510 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "iobuf", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "iobuf_set_options", 00:17:54.510 "params": { 00:17:54.510 "small_pool_count": 8192, 00:17:54.510 "large_pool_count": 1024, 00:17:54.510 "small_bufsize": 8192, 00:17:54.510 "large_bufsize": 135168, 00:17:54.510 "enable_numa": false 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "sock", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "sock_set_default_impl", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "posix" 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "sock_impl_set_options", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "ssl", 00:17:54.510 "recv_buf_size": 4096, 00:17:54.510 "send_buf_size": 4096, 00:17:54.510 "enable_recv_pipe": true, 00:17:54.510 "enable_quickack": false, 00:17:54.510 "enable_placement_id": 0, 00:17:54.510 "enable_zerocopy_send_server": true, 00:17:54.510 "enable_zerocopy_send_client": false, 00:17:54.510 "zerocopy_threshold": 0, 00:17:54.510 "tls_version": 0, 00:17:54.510 "enable_ktls": false 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "sock_impl_set_options", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "posix", 00:17:54.510 "recv_buf_size": 2097152, 00:17:54.510 "send_buf_size": 2097152, 00:17:54.510 "enable_recv_pipe": true, 00:17:54.510 "enable_quickack": false, 00:17:54.510 "enable_placement_id": 0, 00:17:54.510 "enable_zerocopy_send_server": true, 00:17:54.510 "enable_zerocopy_send_client": false, 00:17:54.510 "zerocopy_threshold": 0, 00:17:54.510 "tls_version": 0, 00:17:54.510 "enable_ktls": false 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "vmd", 00:17:54.510 "config": [] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "accel", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "accel_set_options", 00:17:54.510 "params": { 00:17:54.510 "small_cache_size": 128, 00:17:54.510 "large_cache_size": 16, 00:17:54.510 "task_count": 2048, 00:17:54.510 "sequence_count": 2048, 00:17:54.510 "buf_count": 2048 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "bdev", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "bdev_set_options", 00:17:54.510 "params": { 00:17:54.510 "bdev_io_pool_size": 65535, 00:17:54.510 "bdev_io_cache_size": 256, 00:17:54.510 "bdev_auto_examine": true, 00:17:54.510 "iobuf_small_cache_size": 128, 00:17:54.510 "iobuf_large_cache_size": 16 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "bdev_raid_set_options", 00:17:54.510 "params": { 00:17:54.510 "process_window_size_kb": 1024, 00:17:54.510 "process_max_bandwidth_mb_sec": 0 00:17:54.510 } 00:17:54.510 }, 
00:17:54.510 { 00:17:54.510 "method": "bdev_iscsi_set_options", 00:17:54.510 "params": { 00:17:54.510 "timeout_sec": 30 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "bdev_nvme_set_options", 00:17:54.510 "params": { 00:17:54.510 "action_on_timeout": "none", 00:17:54.510 "timeout_us": 0, 00:17:54.510 "timeout_admin_us": 0, 00:17:54.510 "keep_alive_timeout_ms": 10000, 00:17:54.510 "arbitration_burst": 0, 00:17:54.511 "low_priority_weight": 0, 00:17:54.511 "medium_priority_weight": 0, 00:17:54.511 "high_priority_weight": 0, 00:17:54.511 "nvme_adminq_poll_period_us": 10000, 00:17:54.511 "nvme_ioq_poll_period_us": 0, 00:17:54.511 "io_queue_requests": 0, 00:17:54.511 "delay_cmd_submit": true, 00:17:54.511 "transport_retry_count": 4, 00:17:54.511 "bdev_retry_count": 3, 00:17:54.511 "transport_ack_timeout": 0, 00:17:54.511 "ctrlr_loss_timeout_sec": 0, 00:17:54.511 "reconnect_delay_sec": 0, 00:17:54.511 "fast_io_fail_timeout_sec": 0, 00:17:54.511 "disable_auto_failback": false, 00:17:54.511 "generate_uuids": false, 00:17:54.511 "transport_tos": 0, 00:17:54.511 "nvme_error_stat": false, 00:17:54.511 "rdma_srq_size": 0, 00:17:54.511 "io_path_stat": false, 00:17:54.511 "allow_accel_sequence": false, 00:17:54.511 "rdma_max_cq_size": 0, 00:17:54.511 "rdma_cm_event_timeout_ms": 0, 00:17:54.511 "dhchap_digests": [ 00:17:54.511 "sha256", 00:17:54.511 "sha384", 00:17:54.511 "sha512" 00:17:54.511 ], 00:17:54.511 "dhchap_dhgroups": [ 00:17:54.511 "null", 00:17:54.511 "ffdhe2048", 00:17:54.511 "ffdhe3072", 00:17:54.511 "ffdhe4096", 00:17:54.511 "ffdhe6144", 00:17:54.511 "ffdhe8192" 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_nvme_set_hotplug", 00:17:54.511 "params": { 00:17:54.511 "period_us": 100000, 00:17:54.511 "enable": false 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_malloc_create", 00:17:54.511 "params": { 00:17:54.511 "name": "malloc0", 00:17:54.511 "num_blocks": 8192, 00:17:54.511 "block_size": 4096, 00:17:54.511 "physical_block_size": 4096, 00:17:54.511 "uuid": "7bd0cf86-bcc7-4a1c-8575-f4d5f3d29f85", 00:17:54.511 "optimal_io_boundary": 0, 00:17:54.511 "md_size": 0, 00:17:54.511 "dif_type": 0, 00:17:54.511 "dif_is_head_of_md": false, 00:17:54.511 "dif_pi_format": 0 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_wait_for_examine" 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "nbd", 00:17:54.511 "config": [] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "scheduler", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "framework_set_scheduler", 00:17:54.511 "params": { 00:17:54.511 "name": "static" 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "nvmf", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_config", 00:17:54.511 "params": { 00:17:54.511 "discovery_filter": "match_any", 00:17:54.511 "admin_cmd_passthru": { 00:17:54.511 "identify_ctrlr": false 00:17:54.511 }, 00:17:54.511 "dhchap_digests": [ 00:17:54.511 "sha256", 00:17:54.511 "sha384", 00:17:54.511 "sha512" 00:17:54.511 ], 00:17:54.511 "dhchap_dhgroups": [ 00:17:54.511 "null", 00:17:54.511 "ffdhe2048", 00:17:54.511 "ffdhe3072", 00:17:54.511 "ffdhe4096", 00:17:54.511 "ffdhe6144", 00:17:54.511 "ffdhe8192" 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_max_subsystems", 00:17:54.511 "params": { 00:17:54.511 "max_subsystems": 1024 
00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_crdt", 00:17:54.511 "params": { 00:17:54.511 "crdt1": 0, 00:17:54.511 "crdt2": 0, 00:17:54.511 "crdt3": 0 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_create_transport", 00:17:54.511 "params": { 00:17:54.511 "trtype": "TCP", 00:17:54.511 "max_queue_depth": 128, 00:17:54.511 "max_io_qpairs_per_ctrlr": 127, 00:17:54.511 "in_capsule_data_size": 4096, 00:17:54.511 "max_io_size": 131072, 00:17:54.511 "io_unit_size": 131072, 00:17:54.511 "max_aq_depth": 128, 00:17:54.511 "num_shared_buffers": 511, 00:17:54.511 "buf_cache_size": 4294967295, 00:17:54.511 "dif_insert_or_strip": false, 00:17:54.511 "zcopy": false, 00:17:54.511 "c2h_success": false, 00:17:54.511 "sock_priority": 0, 00:17:54.511 "abort_timeout_sec": 1, 00:17:54.511 "ack_timeout": 0, 00:17:54.511 "data_wr_pool_size": 0 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_create_subsystem", 00:17:54.511 "params": { 00:17:54.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.511 "allow_any_host": false, 00:17:54.511 "serial_number": "SPDK00000000000001", 00:17:54.511 "model_number": "SPDK bdev Controller", 00:17:54.511 "max_namespaces": 10, 00:17:54.511 "min_cntlid": 1, 00:17:54.511 "max_cntlid": 65519, 00:17:54.511 "ana_reporting": false 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_subsystem_add_host", 00:17:54.511 "params": { 00:17:54.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.511 "host": "nqn.2016-06.io.spdk:host1", 00:17:54.511 "psk": "key0" 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_subsystem_add_ns", 00:17:54.511 "params": { 00:17:54.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.511 "namespace": { 00:17:54.511 "nsid": 1, 00:17:54.511 "bdev_name": "malloc0", 00:17:54.511 "nguid": "7BD0CF86BCC74A1C8575F4D5F3D29F85", 00:17:54.511 "uuid": "7bd0cf86-bcc7-4a1c-8575-f4d5f3d29f85", 00:17:54.511 "no_auto_visible": false 00:17:54.511 } 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_subsystem_add_listener", 00:17:54.511 "params": { 00:17:54.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.511 "listen_address": { 00:17:54.511 "trtype": "TCP", 00:17:54.511 "adrfam": "IPv4", 00:17:54.511 "traddr": "10.0.0.2", 00:17:54.511 "trsvcid": "4420" 00:17:54.511 }, 00:17:54.511 "secure_channel": true 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }' 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3344210 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3344210 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3344210 ']' 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:17:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.511 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.512 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.512 [2024-12-13 09:29:06.688124] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:17:54.512 [2024-12-13 09:29:06.688170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.512 [2024-12-13 09:29:06.752823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.512 [2024-12-13 09:29:06.791390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.512 [2024-12-13 09:29:06.791423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.512 [2024-12-13 09:29:06.791430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.512 [2024-12-13 09:29:06.791436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.512 [2024-12-13 09:29:06.791440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.512 [2024-12-13 09:29:06.791961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.770 [2024-12-13 09:29:07.004140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.770 [2024-12-13 09:29:07.036169] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.770 [2024-12-13 09:29:07.036375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3344246 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3344246 /var/tmp/bdevperf.sock 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3344246 ']' 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.336 09:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.336 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:55.336 "subsystems": [ 00:17:55.336 { 00:17:55.336 "subsystem": "keyring", 00:17:55.336 "config": [ 00:17:55.336 { 00:17:55.336 "method": "keyring_file_add_key", 00:17:55.336 "params": { 00:17:55.336 "name": "key0", 00:17:55.336 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:17:55.336 } 00:17:55.336 } 00:17:55.336 ] 00:17:55.336 }, 00:17:55.336 { 00:17:55.336 "subsystem": "iobuf", 00:17:55.336 "config": [ 00:17:55.336 { 00:17:55.336 "method": "iobuf_set_options", 00:17:55.336 "params": { 00:17:55.336 "small_pool_count": 8192, 00:17:55.336 "large_pool_count": 1024, 00:17:55.336 "small_bufsize": 8192, 00:17:55.336 "large_bufsize": 135168, 00:17:55.336 "enable_numa": false 00:17:55.336 } 00:17:55.336 } 00:17:55.336 ] 00:17:55.336 }, 00:17:55.336 { 00:17:55.336 "subsystem": "sock", 00:17:55.336 "config": [ 00:17:55.336 { 00:17:55.336 "method": "sock_set_default_impl", 00:17:55.336 "params": { 00:17:55.336 "impl_name": "posix" 00:17:55.336 } 00:17:55.336 }, 00:17:55.336 { 00:17:55.336 "method": "sock_impl_set_options", 00:17:55.336 "params": { 00:17:55.336 "impl_name": "ssl", 00:17:55.336 "recv_buf_size": 4096, 00:17:55.336 "send_buf_size": 4096, 00:17:55.336 "enable_recv_pipe": true, 00:17:55.336 "enable_quickack": false, 00:17:55.336 "enable_placement_id": 0, 00:17:55.336 "enable_zerocopy_send_server": true, 00:17:55.336 "enable_zerocopy_send_client": false, 00:17:55.336 "zerocopy_threshold": 0, 00:17:55.336 "tls_version": 0, 00:17:55.336 "enable_ktls": false 00:17:55.336 } 00:17:55.336 }, 00:17:55.336 { 00:17:55.336 "method": "sock_impl_set_options", 00:17:55.336 "params": { 00:17:55.336 "impl_name": "posix", 00:17:55.336 "recv_buf_size": 2097152, 00:17:55.336 "send_buf_size": 2097152, 00:17:55.336 "enable_recv_pipe": true, 00:17:55.336 "enable_quickack": false, 00:17:55.336 "enable_placement_id": 0, 00:17:55.336 "enable_zerocopy_send_server": true, 00:17:55.336 "enable_zerocopy_send_client": false, 00:17:55.336 "zerocopy_threshold": 0, 00:17:55.336 "tls_version": 0, 00:17:55.336 "enable_ktls": false 00:17:55.336 } 00:17:55.336 } 00:17:55.336 ] 00:17:55.336 }, 00:17:55.336 { 00:17:55.336 "subsystem": "vmd", 00:17:55.337 "config": [] 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "subsystem": "accel", 00:17:55.337 "config": [ 00:17:55.337 { 00:17:55.337 "method": "accel_set_options", 00:17:55.337 "params": { 00:17:55.337 "small_cache_size": 128, 00:17:55.337 "large_cache_size": 16, 00:17:55.337 "task_count": 2048, 00:17:55.337 "sequence_count": 2048, 00:17:55.337 "buf_count": 2048 00:17:55.337 } 00:17:55.337 } 00:17:55.337 ] 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "subsystem": "bdev", 00:17:55.337 "config": [ 00:17:55.337 { 00:17:55.337 "method": "bdev_set_options", 00:17:55.337 "params": { 00:17:55.337 "bdev_io_pool_size": 65535, 00:17:55.337 "bdev_io_cache_size": 256, 00:17:55.337 "bdev_auto_examine": true, 00:17:55.337 "iobuf_small_cache_size": 128, 00:17:55.337 "iobuf_large_cache_size": 16 00:17:55.337 } 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "method": "bdev_raid_set_options", 00:17:55.337 "params": { 00:17:55.337 "process_window_size_kb": 1024, 00:17:55.337 "process_max_bandwidth_mb_sec": 0 00:17:55.337 } 00:17:55.337 }, 
00:17:55.337 { 00:17:55.337 "method": "bdev_iscsi_set_options", 00:17:55.337 "params": { 00:17:55.337 "timeout_sec": 30 00:17:55.337 } 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "method": "bdev_nvme_set_options", 00:17:55.337 "params": { 00:17:55.337 "action_on_timeout": "none", 00:17:55.337 "timeout_us": 0, 00:17:55.337 "timeout_admin_us": 0, 00:17:55.337 "keep_alive_timeout_ms": 10000, 00:17:55.337 "arbitration_burst": 0, 00:17:55.337 "low_priority_weight": 0, 00:17:55.337 "medium_priority_weight": 0, 00:17:55.337 "high_priority_weight": 0, 00:17:55.337 "nvme_adminq_poll_period_us": 10000, 00:17:55.337 "nvme_ioq_poll_period_us": 0, 00:17:55.337 "io_queue_requests": 512, 00:17:55.337 "delay_cmd_submit": true, 00:17:55.337 "transport_retry_count": 4, 00:17:55.337 "bdev_retry_count": 3, 00:17:55.337 "transport_ack_timeout": 0, 00:17:55.337 "ctrlr_loss_timeout_sec": 0, 00:17:55.337 "reconnect_delay_sec": 0, 00:17:55.337 "fast_io_fail_timeout_sec": 0, 00:17:55.337 "disable_auto_failback": false, 00:17:55.337 "generate_uuids": false, 00:17:55.337 "transport_tos": 0, 00:17:55.337 "nvme_error_stat": false, 00:17:55.337 "rdma_srq_size": 0, 00:17:55.337 "io_path_stat": false, 00:17:55.337 "allow_accel_sequence": false, 00:17:55.337 "rdma_max_cq_size": 0, 00:17:55.337 "rdma_cm_event_timeout_ms": 0, 00:17:55.337 "dhchap_digests": [ 00:17:55.337 "sha256", 00:17:55.337 "sha384", 00:17:55.337 "sha512" 00:17:55.337 ], 00:17:55.337 "dhchap_dhgroups": [ 00:17:55.337 "null", 00:17:55.337 "ffdhe2048", 00:17:55.337 "ffdhe3072", 00:17:55.337 "ffdhe4096", 00:17:55.337 "ffdhe6144", 00:17:55.337 "ffdhe8192" 00:17:55.337 ] 00:17:55.337 } 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "method": "bdev_nvme_attach_controller", 00:17:55.337 "params": { 00:17:55.337 "name": "TLSTEST", 00:17:55.337 "trtype": "TCP", 00:17:55.337 "adrfam": "IPv4", 00:17:55.337 "traddr": "10.0.0.2", 00:17:55.337 "trsvcid": "4420", 00:17:55.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.337 "prchk_reftag": false, 00:17:55.337 "prchk_guard": false, 00:17:55.337 "ctrlr_loss_timeout_sec": 0, 00:17:55.337 "reconnect_delay_sec": 0, 00:17:55.337 "fast_io_fail_timeout_sec": 0, 00:17:55.337 "psk": "key0", 00:17:55.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.337 "hdgst": false, 00:17:55.337 "ddgst": false, 00:17:55.337 "multipath": "multipath" 00:17:55.337 } 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "method": "bdev_nvme_set_hotplug", 00:17:55.337 "params": { 00:17:55.337 "period_us": 100000, 00:17:55.337 "enable": false 00:17:55.337 } 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "method": "bdev_wait_for_examine" 00:17:55.337 } 00:17:55.337 ] 00:17:55.337 }, 00:17:55.337 { 00:17:55.337 "subsystem": "nbd", 00:17:55.337 "config": [] 00:17:55.337 } 00:17:55.337 ] 00:17:55.337 }' 00:17:55.337 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.337 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.337 [2024-12-13 09:29:07.591139] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
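Note on the initiator side of this run: the JSON fed to bdevperf via -c /dev/fd/63 above carries the TLS wiring in two places, a keyring_file_add_key entry that registers the PSK file /tmp/tmp.O7Sx0Vvrdq under the name "key0", and a bdev_nvme_attach_controller call whose "psk": "key0" ties the NVMe/TCP connection to that key. The same initiator setup can also be driven over the RPC socket of an idle bdevperf (-z) instead of a config file, which is what later runs in this log do; a minimal sketch, with repo-relative paths assumed and the addresses, NQNs and key path taken from the config above:

  # start bdevperf idle so it can be configured over /var/tmp/bdevperf.sock
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # register the PSK file under the name key0
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq

  # attach the TLS-protected NVMe/TCP controller, referencing the key by name
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

  # run the configured verify workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

This is the same sequence target/tls.sh issues further down at @259, @260 and @264; here the config file names the controller TLSTEST instead of nvme0.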
00:17:55.337 [2024-12-13 09:29:07.591188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344246 ] 00:17:55.337 [2024-12-13 09:29:07.650689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.337 [2024-12-13 09:29:07.689583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.596 [2024-12-13 09:29:07.843383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.162 09:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.162 09:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.162 09:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:56.162 Running I/O for 10 seconds... 00:17:58.472 5406.00 IOPS, 21.12 MiB/s [2024-12-13T08:29:11.773Z] 5463.50 IOPS, 21.34 MiB/s [2024-12-13T08:29:12.708Z] 5452.33 IOPS, 21.30 MiB/s [2024-12-13T08:29:13.643Z] 5452.00 IOPS, 21.30 MiB/s [2024-12-13T08:29:14.578Z] 5439.80 IOPS, 21.25 MiB/s [2024-12-13T08:29:15.953Z] 5463.50 IOPS, 21.34 MiB/s [2024-12-13T08:29:16.520Z] 5422.00 IOPS, 21.18 MiB/s [2024-12-13T08:29:17.896Z] 5446.12 IOPS, 21.27 MiB/s [2024-12-13T08:29:18.832Z] 5445.33 IOPS, 21.27 MiB/s [2024-12-13T08:29:18.832Z] 5440.60 IOPS, 21.25 MiB/s 00:18:06.466 Latency(us) 00:18:06.466 [2024-12-13T08:29:18.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.466 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:06.466 Verification LBA range: start 0x0 length 0x2000 00:18:06.466 TLSTESTn1 : 10.04 5433.61 21.23 0.00 0.00 23500.84 6553.60 34952.53 00:18:06.466 [2024-12-13T08:29:18.832Z] =================================================================================================================== 00:18:06.466 [2024-12-13T08:29:18.832Z] Total : 5433.61 21.23 0.00 0.00 23500.84 6553.60 34952.53 00:18:06.466 { 00:18:06.466 "results": [ 00:18:06.466 { 00:18:06.466 "job": "TLSTESTn1", 00:18:06.466 "core_mask": "0x4", 00:18:06.466 "workload": "verify", 00:18:06.466 "status": "finished", 00:18:06.466 "verify_range": { 00:18:06.466 "start": 0, 00:18:06.466 "length": 8192 00:18:06.466 }, 00:18:06.466 "queue_depth": 128, 00:18:06.466 "io_size": 4096, 00:18:06.466 "runtime": 10.036242, 00:18:06.466 "iops": 5433.607519627367, 00:18:06.466 "mibps": 21.225029373544402, 00:18:06.466 "io_failed": 0, 00:18:06.466 "io_timeout": 0, 00:18:06.466 "avg_latency_us": 23500.835346635897, 00:18:06.466 "min_latency_us": 6553.6, 00:18:06.466 "max_latency_us": 34952.53333333333 00:18:06.466 } 00:18:06.466 ], 00:18:06.466 "core_count": 1 00:18:06.466 } 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3344246 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3344246 ']' 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3344246 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344246 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344246' 00:18:06.466 killing process with pid 3344246 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3344246 00:18:06.466 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.466 00:18:06.466 Latency(us) 00:18:06.466 [2024-12-13T08:29:18.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.466 [2024-12-13T08:29:18.832Z] =================================================================================================================== 00:18:06.466 [2024-12-13T08:29:18.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3344246 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3344210 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3344210 ']' 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3344210 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.466 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344210 00:18:06.726 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.726 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.726 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344210' 00:18:06.726 killing process with pid 3344210 00:18:06.726 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3344210 00:18:06.726 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3344210 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3346224 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3346224 
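For reference, the 10-second TLSTEST summary above is internally consistent: 5433.61 IOPS over the reported 10.036 s runtime works out to roughly 54,500 completed I/Os; at the 4096-byte I/O size that is 5433.61 * 4096 / 1048576, about 21.23 MiB/s, matching the throughput column; and with the queue depth of 128 used by that bdevperf run, Little's law (outstanding I/O = IOPS * average latency) gives 5433.61/s * 23500.84 us, roughly 128, so the queue stayed essentially full for the whole run.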
00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3346224 ']' 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.726 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.726 [2024-12-13 09:29:19.058898] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:06.726 [2024-12-13 09:29:19.058947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.984 [2024-12-13 09:29:19.125677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.984 [2024-12-13 09:29:19.163124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.984 [2024-12-13 09:29:19.163162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.984 [2024-12-13 09:29:19.163168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.984 [2024-12-13 09:29:19.163175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.984 [2024-12-13 09:29:19.163179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
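The setup_nvmf_tgt helper invoked just below (target/tls.sh@221) brings this freshly started target to a TLS-capable state entirely through rpc.py; condensed, and with the NQNs, addresses and PSK path exactly as they appear in the trace that follows, the sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k asks for a TLS (secure channel) listener
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The 'TLS support is considered experimental' notices in this block come from exactly these steps, once when the -k listener is created on the target and once more when the initiator later attaches with a PSK.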
00:18:06.984 [2024-12-13 09:29:19.163730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.O7Sx0Vvrdq 00:18:06.984 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.O7Sx0Vvrdq 00:18:06.985 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.243 [2024-12-13 09:29:19.463365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.243 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.501 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.501 [2024-12-13 09:29:19.832301] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.501 [2024-12-13 09:29:19.832510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.501 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:07.759 malloc0 00:18:07.759 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.017 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3346500 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3346500 /var/tmp/bdevperf.sock 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3346500 ']' 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.275 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.535 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.535 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.535 [2024-12-13 09:29:20.684483] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:08.535 [2024-12-13 09:29:20.684534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346500 ] 00:18:08.535 [2024-12-13 09:29:20.747283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.535 [2024-12-13 09:29:20.786419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.535 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.535 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.535 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:18:08.794 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:09.052 [2024-12-13 09:29:21.225944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.052 nvme0n1 00:18:09.052 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:09.052 Running I/O for 1 seconds... 
00:18:10.427 5411.00 IOPS, 21.14 MiB/s 00:18:10.427 Latency(us) 00:18:10.427 [2024-12-13T08:29:22.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.427 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:10.427 Verification LBA range: start 0x0 length 0x2000 00:18:10.427 nvme0n1 : 1.02 5453.12 21.30 0.00 0.00 23285.68 6210.32 31706.94 00:18:10.427 [2024-12-13T08:29:22.793Z] =================================================================================================================== 00:18:10.427 [2024-12-13T08:29:22.793Z] Total : 5453.12 21.30 0.00 0.00 23285.68 6210.32 31706.94 00:18:10.427 { 00:18:10.427 "results": [ 00:18:10.427 { 00:18:10.427 "job": "nvme0n1", 00:18:10.427 "core_mask": "0x2", 00:18:10.427 "workload": "verify", 00:18:10.427 "status": "finished", 00:18:10.427 "verify_range": { 00:18:10.427 "start": 0, 00:18:10.427 "length": 8192 00:18:10.427 }, 00:18:10.427 "queue_depth": 128, 00:18:10.427 "io_size": 4096, 00:18:10.427 "runtime": 1.015749, 00:18:10.427 "iops": 5453.118831522354, 00:18:10.427 "mibps": 21.301245435634197, 00:18:10.427 "io_failed": 0, 00:18:10.427 "io_timeout": 0, 00:18:10.427 "avg_latency_us": 23285.684114890944, 00:18:10.427 "min_latency_us": 6210.31619047619, 00:18:10.427 "max_latency_us": 31706.94095238095 00:18:10.427 } 00:18:10.427 ], 00:18:10.427 "core_count": 1 00:18:10.427 } 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3346500 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3346500 ']' 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3346500 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346500 00:18:10.427 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346500' 00:18:10.428 killing process with pid 3346500 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3346500 00:18:10.428 Received shutdown signal, test time was about 1.000000 seconds 00:18:10.428 00:18:10.428 Latency(us) 00:18:10.428 [2024-12-13T08:29:22.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.428 [2024-12-13T08:29:22.794Z] =================================================================================================================== 00:18:10.428 [2024-12-13T08:29:22.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3346500 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3346224 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3346224 ']' 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3346224 00:18:10.428 09:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346224 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346224' 00:18:10.428 killing process with pid 3346224 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3346224 00:18:10.428 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3346224 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3346754 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3346754 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3346754 ']' 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.686 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.686 [2024-12-13 09:29:22.904620] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:10.686 [2024-12-13 09:29:22.904670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.686 [2024-12-13 09:29:22.971725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.686 [2024-12-13 09:29:23.009282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.687 [2024-12-13 09:29:23.009316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
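The notice immediately above, together with its companions that follow, is printed because the target is launched with -e 0xFFFF, i.e. with the full tracepoint group mask enabled. Following those hints, the trace could be pulled out roughly like this (binary path assumed to be a standard SPDK build tree, instance id 0 matching the -i 0 on the nvmf_tgt command line):

  # take a live snapshot of the enabled nvmf tracepoints
  build/bin/spdk_trace -s nvmf -i 0

  # or keep the shared-memory trace file for offline analysis, as the next notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0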
00:18:10.687 [2024-12-13 09:29:23.009322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.687 [2024-12-13 09:29:23.009328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.687 [2024-12-13 09:29:23.009333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.687 [2024-12-13 09:29:23.009873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.945 [2024-12-13 09:29:23.146616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.945 malloc0 00:18:10.945 [2024-12-13 09:29:23.174921] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.945 [2024-12-13 09:29:23.175139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3346975 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3346975 /var/tmp/bdevperf.sock 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3346975 ']' 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.945 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.945 [2024-12-13 09:29:23.252350] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
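Once this bdevperf instance is keyed and attached (target/tls.sh@259 and @260 just below) and its one-second verify pass completes, the script snapshots the target's live configuration with rpc_cmd save_config; the tgtcfg JSON further down is that snapshot. The same round trip works outside the test harness; a minimal sketch, with the default /var/tmp/spdk.sock RPC socket and the file name /tmp/tgt.json assumed:

  # dump the running target's configuration, TLS keyring entry included
  scripts/rpc.py save_config > /tmp/tgt.json

  # a later target can be started straight from that snapshot
  build/bin/nvmf_tgt -c /tmp/tgt.json

Note that the snapshot stores only the path of the PSK file ("path": "/tmp/tmp.O7Sx0Vvrdq"), not the key material itself, so the key file still has to exist when the configuration is replayed.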
00:18:10.945 [2024-12-13 09:29:23.252391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346975 ] 00:18:11.252 [2024-12-13 09:29:23.315143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.252 [2024-12-13 09:29:23.355062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.252 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.252 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.252 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O7Sx0Vvrdq 00:18:11.591 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:11.591 [2024-12-13 09:29:23.814985] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.591 nvme0n1 00:18:11.591 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.849 Running I/O for 1 seconds... 00:18:12.784 5373.00 IOPS, 20.99 MiB/s 00:18:12.784 Latency(us) 00:18:12.784 [2024-12-13T08:29:25.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.784 Verification LBA range: start 0x0 length 0x2000 00:18:12.784 nvme0n1 : 1.02 5408.68 21.13 0.00 0.00 23478.57 4712.35 41943.04 00:18:12.784 [2024-12-13T08:29:25.150Z] =================================================================================================================== 00:18:12.784 [2024-12-13T08:29:25.150Z] Total : 5408.68 21.13 0.00 0.00 23478.57 4712.35 41943.04 00:18:12.784 { 00:18:12.784 "results": [ 00:18:12.784 { 00:18:12.784 "job": "nvme0n1", 00:18:12.784 "core_mask": "0x2", 00:18:12.784 "workload": "verify", 00:18:12.784 "status": "finished", 00:18:12.784 "verify_range": { 00:18:12.784 "start": 0, 00:18:12.784 "length": 8192 00:18:12.784 }, 00:18:12.784 "queue_depth": 128, 00:18:12.784 "io_size": 4096, 00:18:12.784 "runtime": 1.017068, 00:18:12.784 "iops": 5408.684571729717, 00:18:12.784 "mibps": 21.12767410831921, 00:18:12.784 "io_failed": 0, 00:18:12.784 "io_timeout": 0, 00:18:12.784 "avg_latency_us": 23478.57082954614, 00:18:12.784 "min_latency_us": 4712.350476190476, 00:18:12.784 "max_latency_us": 41943.04 00:18:12.784 } 00:18:12.784 ], 00:18:12.784 "core_count": 1 00:18:12.784 } 00:18:12.784 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:12.784 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.784 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.042 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.042 09:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:13.042 "subsystems": [ 00:18:13.042 { 00:18:13.042 "subsystem": "keyring", 00:18:13.042 "config": [ 00:18:13.042 { 00:18:13.042 "method": "keyring_file_add_key", 00:18:13.042 "params": { 00:18:13.042 "name": "key0", 00:18:13.042 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:18:13.042 } 00:18:13.042 } 00:18:13.042 ] 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "subsystem": "iobuf", 00:18:13.042 "config": [ 00:18:13.042 { 00:18:13.042 "method": "iobuf_set_options", 00:18:13.042 "params": { 00:18:13.042 "small_pool_count": 8192, 00:18:13.042 "large_pool_count": 1024, 00:18:13.042 "small_bufsize": 8192, 00:18:13.042 "large_bufsize": 135168, 00:18:13.042 "enable_numa": false 00:18:13.042 } 00:18:13.042 } 00:18:13.042 ] 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "subsystem": "sock", 00:18:13.042 "config": [ 00:18:13.042 { 00:18:13.042 "method": "sock_set_default_impl", 00:18:13.042 "params": { 00:18:13.042 "impl_name": "posix" 00:18:13.042 } 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "method": "sock_impl_set_options", 00:18:13.042 "params": { 00:18:13.042 "impl_name": "ssl", 00:18:13.042 "recv_buf_size": 4096, 00:18:13.042 "send_buf_size": 4096, 00:18:13.042 "enable_recv_pipe": true, 00:18:13.042 "enable_quickack": false, 00:18:13.042 "enable_placement_id": 0, 00:18:13.042 "enable_zerocopy_send_server": true, 00:18:13.042 "enable_zerocopy_send_client": false, 00:18:13.042 "zerocopy_threshold": 0, 00:18:13.042 "tls_version": 0, 00:18:13.042 "enable_ktls": false 00:18:13.042 } 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "method": "sock_impl_set_options", 00:18:13.042 "params": { 00:18:13.042 "impl_name": "posix", 00:18:13.042 "recv_buf_size": 2097152, 00:18:13.042 "send_buf_size": 2097152, 00:18:13.042 "enable_recv_pipe": true, 00:18:13.042 "enable_quickack": false, 00:18:13.042 "enable_placement_id": 0, 00:18:13.042 "enable_zerocopy_send_server": true, 00:18:13.042 "enable_zerocopy_send_client": false, 00:18:13.042 "zerocopy_threshold": 0, 00:18:13.042 "tls_version": 0, 00:18:13.042 "enable_ktls": false 00:18:13.042 } 00:18:13.042 } 00:18:13.042 ] 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "subsystem": "vmd", 00:18:13.042 "config": [] 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "subsystem": "accel", 00:18:13.042 "config": [ 00:18:13.042 { 00:18:13.042 "method": "accel_set_options", 00:18:13.042 "params": { 00:18:13.042 "small_cache_size": 128, 00:18:13.042 "large_cache_size": 16, 00:18:13.042 "task_count": 2048, 00:18:13.042 "sequence_count": 2048, 00:18:13.042 "buf_count": 2048 00:18:13.042 } 00:18:13.042 } 00:18:13.042 ] 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "subsystem": "bdev", 00:18:13.042 "config": [ 00:18:13.042 { 00:18:13.042 "method": "bdev_set_options", 00:18:13.042 "params": { 00:18:13.042 "bdev_io_pool_size": 65535, 00:18:13.042 "bdev_io_cache_size": 256, 00:18:13.042 "bdev_auto_examine": true, 00:18:13.042 "iobuf_small_cache_size": 128, 00:18:13.042 "iobuf_large_cache_size": 16 00:18:13.042 } 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "method": "bdev_raid_set_options", 00:18:13.042 "params": { 00:18:13.042 "process_window_size_kb": 1024, 00:18:13.042 "process_max_bandwidth_mb_sec": 0 00:18:13.042 } 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "method": "bdev_iscsi_set_options", 00:18:13.042 "params": { 00:18:13.042 "timeout_sec": 30 00:18:13.042 } 00:18:13.042 }, 00:18:13.042 { 00:18:13.042 "method": "bdev_nvme_set_options", 00:18:13.042 "params": { 00:18:13.042 "action_on_timeout": "none", 00:18:13.042 
"timeout_us": 0, 00:18:13.042 "timeout_admin_us": 0, 00:18:13.042 "keep_alive_timeout_ms": 10000, 00:18:13.042 "arbitration_burst": 0, 00:18:13.042 "low_priority_weight": 0, 00:18:13.042 "medium_priority_weight": 0, 00:18:13.042 "high_priority_weight": 0, 00:18:13.043 "nvme_adminq_poll_period_us": 10000, 00:18:13.043 "nvme_ioq_poll_period_us": 0, 00:18:13.043 "io_queue_requests": 0, 00:18:13.043 "delay_cmd_submit": true, 00:18:13.043 "transport_retry_count": 4, 00:18:13.043 "bdev_retry_count": 3, 00:18:13.043 "transport_ack_timeout": 0, 00:18:13.043 "ctrlr_loss_timeout_sec": 0, 00:18:13.043 "reconnect_delay_sec": 0, 00:18:13.043 "fast_io_fail_timeout_sec": 0, 00:18:13.043 "disable_auto_failback": false, 00:18:13.043 "generate_uuids": false, 00:18:13.043 "transport_tos": 0, 00:18:13.043 "nvme_error_stat": false, 00:18:13.043 "rdma_srq_size": 0, 00:18:13.043 "io_path_stat": false, 00:18:13.043 "allow_accel_sequence": false, 00:18:13.043 "rdma_max_cq_size": 0, 00:18:13.043 "rdma_cm_event_timeout_ms": 0, 00:18:13.043 "dhchap_digests": [ 00:18:13.043 "sha256", 00:18:13.043 "sha384", 00:18:13.043 "sha512" 00:18:13.043 ], 00:18:13.043 "dhchap_dhgroups": [ 00:18:13.043 "null", 00:18:13.043 "ffdhe2048", 00:18:13.043 "ffdhe3072", 00:18:13.043 "ffdhe4096", 00:18:13.043 "ffdhe6144", 00:18:13.043 "ffdhe8192" 00:18:13.043 ] 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "bdev_nvme_set_hotplug", 00:18:13.043 "params": { 00:18:13.043 "period_us": 100000, 00:18:13.043 "enable": false 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "bdev_malloc_create", 00:18:13.043 "params": { 00:18:13.043 "name": "malloc0", 00:18:13.043 "num_blocks": 8192, 00:18:13.043 "block_size": 4096, 00:18:13.043 "physical_block_size": 4096, 00:18:13.043 "uuid": "9d84f5b6-3deb-4d9b-a3c7-ac4f79187725", 00:18:13.043 "optimal_io_boundary": 0, 00:18:13.043 "md_size": 0, 00:18:13.043 "dif_type": 0, 00:18:13.043 "dif_is_head_of_md": false, 00:18:13.043 "dif_pi_format": 0 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "bdev_wait_for_examine" 00:18:13.043 } 00:18:13.043 ] 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "subsystem": "nbd", 00:18:13.043 "config": [] 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "subsystem": "scheduler", 00:18:13.043 "config": [ 00:18:13.043 { 00:18:13.043 "method": "framework_set_scheduler", 00:18:13.043 "params": { 00:18:13.043 "name": "static" 00:18:13.043 } 00:18:13.043 } 00:18:13.043 ] 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "subsystem": "nvmf", 00:18:13.043 "config": [ 00:18:13.043 { 00:18:13.043 "method": "nvmf_set_config", 00:18:13.043 "params": { 00:18:13.043 "discovery_filter": "match_any", 00:18:13.043 "admin_cmd_passthru": { 00:18:13.043 "identify_ctrlr": false 00:18:13.043 }, 00:18:13.043 "dhchap_digests": [ 00:18:13.043 "sha256", 00:18:13.043 "sha384", 00:18:13.043 "sha512" 00:18:13.043 ], 00:18:13.043 "dhchap_dhgroups": [ 00:18:13.043 "null", 00:18:13.043 "ffdhe2048", 00:18:13.043 "ffdhe3072", 00:18:13.043 "ffdhe4096", 00:18:13.043 "ffdhe6144", 00:18:13.043 "ffdhe8192" 00:18:13.043 ] 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_set_max_subsystems", 00:18:13.043 "params": { 00:18:13.043 "max_subsystems": 1024 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_set_crdt", 00:18:13.043 "params": { 00:18:13.043 "crdt1": 0, 00:18:13.043 "crdt2": 0, 00:18:13.043 "crdt3": 0 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_create_transport", 00:18:13.043 "params": 
{ 00:18:13.043 "trtype": "TCP", 00:18:13.043 "max_queue_depth": 128, 00:18:13.043 "max_io_qpairs_per_ctrlr": 127, 00:18:13.043 "in_capsule_data_size": 4096, 00:18:13.043 "max_io_size": 131072, 00:18:13.043 "io_unit_size": 131072, 00:18:13.043 "max_aq_depth": 128, 00:18:13.043 "num_shared_buffers": 511, 00:18:13.043 "buf_cache_size": 4294967295, 00:18:13.043 "dif_insert_or_strip": false, 00:18:13.043 "zcopy": false, 00:18:13.043 "c2h_success": false, 00:18:13.043 "sock_priority": 0, 00:18:13.043 "abort_timeout_sec": 1, 00:18:13.043 "ack_timeout": 0, 00:18:13.043 "data_wr_pool_size": 0 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_create_subsystem", 00:18:13.043 "params": { 00:18:13.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.043 "allow_any_host": false, 00:18:13.043 "serial_number": "00000000000000000000", 00:18:13.043 "model_number": "SPDK bdev Controller", 00:18:13.043 "max_namespaces": 32, 00:18:13.043 "min_cntlid": 1, 00:18:13.043 "max_cntlid": 65519, 00:18:13.043 "ana_reporting": false 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_subsystem_add_host", 00:18:13.043 "params": { 00:18:13.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.043 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.043 "psk": "key0" 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_subsystem_add_ns", 00:18:13.043 "params": { 00:18:13.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.043 "namespace": { 00:18:13.043 "nsid": 1, 00:18:13.043 "bdev_name": "malloc0", 00:18:13.043 "nguid": "9D84F5B63DEB4D9BA3C7AC4F79187725", 00:18:13.043 "uuid": "9d84f5b6-3deb-4d9b-a3c7-ac4f79187725", 00:18:13.043 "no_auto_visible": false 00:18:13.043 } 00:18:13.043 } 00:18:13.043 }, 00:18:13.043 { 00:18:13.043 "method": "nvmf_subsystem_add_listener", 00:18:13.043 "params": { 00:18:13.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.043 "listen_address": { 00:18:13.043 "trtype": "TCP", 00:18:13.043 "adrfam": "IPv4", 00:18:13.043 "traddr": "10.0.0.2", 00:18:13.043 "trsvcid": "4420" 00:18:13.043 }, 00:18:13.043 "secure_channel": false, 00:18:13.043 "sock_impl": "ssl" 00:18:13.043 } 00:18:13.043 } 00:18:13.043 ] 00:18:13.043 } 00:18:13.043 ] 00:18:13.043 }' 00:18:13.043 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:13.302 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:13.302 "subsystems": [ 00:18:13.302 { 00:18:13.302 "subsystem": "keyring", 00:18:13.302 "config": [ 00:18:13.302 { 00:18:13.302 "method": "keyring_file_add_key", 00:18:13.302 "params": { 00:18:13.302 "name": "key0", 00:18:13.302 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:18:13.302 } 00:18:13.302 } 00:18:13.302 ] 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "subsystem": "iobuf", 00:18:13.302 "config": [ 00:18:13.302 { 00:18:13.302 "method": "iobuf_set_options", 00:18:13.302 "params": { 00:18:13.302 "small_pool_count": 8192, 00:18:13.302 "large_pool_count": 1024, 00:18:13.302 "small_bufsize": 8192, 00:18:13.302 "large_bufsize": 135168, 00:18:13.302 "enable_numa": false 00:18:13.302 } 00:18:13.302 } 00:18:13.302 ] 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "subsystem": "sock", 00:18:13.302 "config": [ 00:18:13.302 { 00:18:13.302 "method": "sock_set_default_impl", 00:18:13.302 "params": { 00:18:13.302 "impl_name": "posix" 00:18:13.302 } 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "method": "sock_impl_set_options", 00:18:13.302 
"params": { 00:18:13.302 "impl_name": "ssl", 00:18:13.302 "recv_buf_size": 4096, 00:18:13.302 "send_buf_size": 4096, 00:18:13.302 "enable_recv_pipe": true, 00:18:13.302 "enable_quickack": false, 00:18:13.302 "enable_placement_id": 0, 00:18:13.302 "enable_zerocopy_send_server": true, 00:18:13.302 "enable_zerocopy_send_client": false, 00:18:13.302 "zerocopy_threshold": 0, 00:18:13.302 "tls_version": 0, 00:18:13.302 "enable_ktls": false 00:18:13.302 } 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "method": "sock_impl_set_options", 00:18:13.302 "params": { 00:18:13.302 "impl_name": "posix", 00:18:13.302 "recv_buf_size": 2097152, 00:18:13.302 "send_buf_size": 2097152, 00:18:13.302 "enable_recv_pipe": true, 00:18:13.302 "enable_quickack": false, 00:18:13.302 "enable_placement_id": 0, 00:18:13.302 "enable_zerocopy_send_server": true, 00:18:13.302 "enable_zerocopy_send_client": false, 00:18:13.302 "zerocopy_threshold": 0, 00:18:13.302 "tls_version": 0, 00:18:13.302 "enable_ktls": false 00:18:13.302 } 00:18:13.302 } 00:18:13.302 ] 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "subsystem": "vmd", 00:18:13.302 "config": [] 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "subsystem": "accel", 00:18:13.302 "config": [ 00:18:13.302 { 00:18:13.302 "method": "accel_set_options", 00:18:13.302 "params": { 00:18:13.302 "small_cache_size": 128, 00:18:13.302 "large_cache_size": 16, 00:18:13.302 "task_count": 2048, 00:18:13.302 "sequence_count": 2048, 00:18:13.302 "buf_count": 2048 00:18:13.302 } 00:18:13.302 } 00:18:13.302 ] 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "subsystem": "bdev", 00:18:13.302 "config": [ 00:18:13.302 { 00:18:13.302 "method": "bdev_set_options", 00:18:13.302 "params": { 00:18:13.302 "bdev_io_pool_size": 65535, 00:18:13.302 "bdev_io_cache_size": 256, 00:18:13.302 "bdev_auto_examine": true, 00:18:13.302 "iobuf_small_cache_size": 128, 00:18:13.302 "iobuf_large_cache_size": 16 00:18:13.302 } 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "method": "bdev_raid_set_options", 00:18:13.302 "params": { 00:18:13.302 "process_window_size_kb": 1024, 00:18:13.302 "process_max_bandwidth_mb_sec": 0 00:18:13.302 } 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "method": "bdev_iscsi_set_options", 00:18:13.302 "params": { 00:18:13.302 "timeout_sec": 30 00:18:13.302 } 00:18:13.302 }, 00:18:13.302 { 00:18:13.302 "method": "bdev_nvme_set_options", 00:18:13.302 "params": { 00:18:13.302 "action_on_timeout": "none", 00:18:13.302 "timeout_us": 0, 00:18:13.302 "timeout_admin_us": 0, 00:18:13.302 "keep_alive_timeout_ms": 10000, 00:18:13.302 "arbitration_burst": 0, 00:18:13.302 "low_priority_weight": 0, 00:18:13.302 "medium_priority_weight": 0, 00:18:13.302 "high_priority_weight": 0, 00:18:13.302 "nvme_adminq_poll_period_us": 10000, 00:18:13.302 "nvme_ioq_poll_period_us": 0, 00:18:13.302 "io_queue_requests": 512, 00:18:13.302 "delay_cmd_submit": true, 00:18:13.302 "transport_retry_count": 4, 00:18:13.302 "bdev_retry_count": 3, 00:18:13.302 "transport_ack_timeout": 0, 00:18:13.302 "ctrlr_loss_timeout_sec": 0, 00:18:13.302 "reconnect_delay_sec": 0, 00:18:13.302 "fast_io_fail_timeout_sec": 0, 00:18:13.303 "disable_auto_failback": false, 00:18:13.303 "generate_uuids": false, 00:18:13.303 "transport_tos": 0, 00:18:13.303 "nvme_error_stat": false, 00:18:13.303 "rdma_srq_size": 0, 00:18:13.303 "io_path_stat": false, 00:18:13.303 "allow_accel_sequence": false, 00:18:13.303 "rdma_max_cq_size": 0, 00:18:13.303 "rdma_cm_event_timeout_ms": 0, 00:18:13.303 "dhchap_digests": [ 00:18:13.303 "sha256", 00:18:13.303 "sha384", 00:18:13.303 
"sha512" 00:18:13.303 ], 00:18:13.303 "dhchap_dhgroups": [ 00:18:13.303 "null", 00:18:13.303 "ffdhe2048", 00:18:13.303 "ffdhe3072", 00:18:13.303 "ffdhe4096", 00:18:13.303 "ffdhe6144", 00:18:13.303 "ffdhe8192" 00:18:13.303 ] 00:18:13.303 } 00:18:13.303 }, 00:18:13.303 { 00:18:13.303 "method": "bdev_nvme_attach_controller", 00:18:13.303 "params": { 00:18:13.303 "name": "nvme0", 00:18:13.303 "trtype": "TCP", 00:18:13.303 "adrfam": "IPv4", 00:18:13.303 "traddr": "10.0.0.2", 00:18:13.303 "trsvcid": "4420", 00:18:13.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.303 "prchk_reftag": false, 00:18:13.303 "prchk_guard": false, 00:18:13.303 "ctrlr_loss_timeout_sec": 0, 00:18:13.303 "reconnect_delay_sec": 0, 00:18:13.303 "fast_io_fail_timeout_sec": 0, 00:18:13.303 "psk": "key0", 00:18:13.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.303 "hdgst": false, 00:18:13.303 "ddgst": false, 00:18:13.303 "multipath": "multipath" 00:18:13.303 } 00:18:13.303 }, 00:18:13.303 { 00:18:13.303 "method": "bdev_nvme_set_hotplug", 00:18:13.303 "params": { 00:18:13.303 "period_us": 100000, 00:18:13.303 "enable": false 00:18:13.303 } 00:18:13.303 }, 00:18:13.303 { 00:18:13.303 "method": "bdev_enable_histogram", 00:18:13.303 "params": { 00:18:13.303 "name": "nvme0n1", 00:18:13.303 "enable": true 00:18:13.303 } 00:18:13.303 }, 00:18:13.303 { 00:18:13.303 "method": "bdev_wait_for_examine" 00:18:13.303 } 00:18:13.303 ] 00:18:13.303 }, 00:18:13.303 { 00:18:13.303 "subsystem": "nbd", 00:18:13.303 "config": [] 00:18:13.303 } 00:18:13.303 ] 00:18:13.303 }' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3346975 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3346975 ']' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3346975 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346975 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346975' 00:18:13.303 killing process with pid 3346975 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3346975 00:18:13.303 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.303 00:18:13.303 Latency(us) 00:18:13.303 [2024-12-13T08:29:25.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.303 [2024-12-13T08:29:25.669Z] =================================================================================================================== 00:18:13.303 [2024-12-13T08:29:25.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3346975 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3346754 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3346754 
']' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3346754 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.303 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346754 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346754' 00:18:13.562 killing process with pid 3346754 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3346754 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3346754 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.562 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:13.562 "subsystems": [ 00:18:13.562 { 00:18:13.562 "subsystem": "keyring", 00:18:13.562 "config": [ 00:18:13.562 { 00:18:13.562 "method": "keyring_file_add_key", 00:18:13.562 "params": { 00:18:13.562 "name": "key0", 00:18:13.562 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:18:13.562 } 00:18:13.562 } 00:18:13.562 ] 00:18:13.562 }, 00:18:13.562 { 00:18:13.562 "subsystem": "iobuf", 00:18:13.562 "config": [ 00:18:13.562 { 00:18:13.562 "method": "iobuf_set_options", 00:18:13.562 "params": { 00:18:13.562 "small_pool_count": 8192, 00:18:13.562 "large_pool_count": 1024, 00:18:13.562 "small_bufsize": 8192, 00:18:13.562 "large_bufsize": 135168, 00:18:13.562 "enable_numa": false 00:18:13.562 } 00:18:13.562 } 00:18:13.562 ] 00:18:13.562 }, 00:18:13.562 { 00:18:13.562 "subsystem": "sock", 00:18:13.562 "config": [ 00:18:13.562 { 00:18:13.562 "method": "sock_set_default_impl", 00:18:13.562 "params": { 00:18:13.562 "impl_name": "posix" 00:18:13.562 } 00:18:13.562 }, 00:18:13.562 { 00:18:13.562 "method": "sock_impl_set_options", 00:18:13.562 "params": { 00:18:13.562 "impl_name": "ssl", 00:18:13.562 "recv_buf_size": 4096, 00:18:13.562 "send_buf_size": 4096, 00:18:13.562 "enable_recv_pipe": true, 00:18:13.562 "enable_quickack": false, 00:18:13.562 "enable_placement_id": 0, 00:18:13.562 "enable_zerocopy_send_server": true, 00:18:13.562 "enable_zerocopy_send_client": false, 00:18:13.562 "zerocopy_threshold": 0, 00:18:13.562 "tls_version": 0, 00:18:13.562 "enable_ktls": false 00:18:13.562 } 00:18:13.562 }, 00:18:13.562 { 00:18:13.562 "method": "sock_impl_set_options", 00:18:13.562 "params": { 00:18:13.562 "impl_name": "posix", 00:18:13.562 "recv_buf_size": 2097152, 00:18:13.562 "send_buf_size": 2097152, 00:18:13.562 "enable_recv_pipe": true, 00:18:13.562 "enable_quickack": false, 00:18:13.562 "enable_placement_id": 0, 00:18:13.562 "enable_zerocopy_send_server": true, 00:18:13.562 "enable_zerocopy_send_client": false, 00:18:13.562 "zerocopy_threshold": 0, 00:18:13.562 "tls_version": 0, 00:18:13.562 "enable_ktls": 
false 00:18:13.562 } 00:18:13.562 } 00:18:13.562 ] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "vmd", 00:18:13.563 "config": [] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "accel", 00:18:13.563 "config": [ 00:18:13.563 { 00:18:13.563 "method": "accel_set_options", 00:18:13.563 "params": { 00:18:13.563 "small_cache_size": 128, 00:18:13.563 "large_cache_size": 16, 00:18:13.563 "task_count": 2048, 00:18:13.563 "sequence_count": 2048, 00:18:13.563 "buf_count": 2048 00:18:13.563 } 00:18:13.563 } 00:18:13.563 ] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "bdev", 00:18:13.563 "config": [ 00:18:13.563 { 00:18:13.563 "method": "bdev_set_options", 00:18:13.563 "params": { 00:18:13.563 "bdev_io_pool_size": 65535, 00:18:13.563 "bdev_io_cache_size": 256, 00:18:13.563 "bdev_auto_examine": true, 00:18:13.563 "iobuf_small_cache_size": 128, 00:18:13.563 "iobuf_large_cache_size": 16 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_raid_set_options", 00:18:13.563 "params": { 00:18:13.563 "process_window_size_kb": 1024, 00:18:13.563 "process_max_bandwidth_mb_sec": 0 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_iscsi_set_options", 00:18:13.563 "params": { 00:18:13.563 "timeout_sec": 30 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_nvme_set_options", 00:18:13.563 "params": { 00:18:13.563 "action_on_timeout": "none", 00:18:13.563 "timeout_us": 0, 00:18:13.563 "timeout_admin_us": 0, 00:18:13.563 "keep_alive_timeout_ms": 10000, 00:18:13.563 "arbitration_burst": 0, 00:18:13.563 "low_priority_weight": 0, 00:18:13.563 "medium_priority_weight": 0, 00:18:13.563 "high_priority_weight": 0, 00:18:13.563 "nvme_adminq_poll_period_us": 10000, 00:18:13.563 "nvme_ioq_poll_period_us": 0, 00:18:13.563 "io_queue_requests": 0, 00:18:13.563 "delay_cmd_submit": true, 00:18:13.563 "transport_retry_count": 4, 00:18:13.563 "bdev_retry_count": 3, 00:18:13.563 "transport_ack_timeout": 0, 00:18:13.563 "ctrlr_loss_timeout_sec": 0, 00:18:13.563 "reconnect_delay_sec": 0, 00:18:13.563 "fast_io_fail_timeout_sec": 0, 00:18:13.563 "disable_auto_failback": false, 00:18:13.563 "generate_uuids": false, 00:18:13.563 "transport_tos": 0, 00:18:13.563 "nvme_error_stat": false, 00:18:13.563 "rdma_srq_size": 0, 00:18:13.563 "io_path_stat": false, 00:18:13.563 "allow_accel_sequence": false, 00:18:13.563 "rdma_max_cq_size": 0, 00:18:13.563 "rdma_cm_event_timeout_ms": 0, 00:18:13.563 "dhchap_digests": [ 00:18:13.563 "sha256", 00:18:13.563 "sha384", 00:18:13.563 "sha512" 00:18:13.563 ], 00:18:13.563 "dhchap_dhgroups": [ 00:18:13.563 "null", 00:18:13.563 "ffdhe2048", 00:18:13.563 "ffdhe3072", 00:18:13.563 "ffdhe4096", 00:18:13.563 "ffdhe6144", 00:18:13.563 "ffdhe8192" 00:18:13.563 ] 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_nvme_set_hotplug", 00:18:13.563 "params": { 00:18:13.563 "period_us": 100000, 00:18:13.563 "enable": false 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_malloc_create", 00:18:13.563 "params": { 00:18:13.563 "name": "malloc0", 00:18:13.563 "num_blocks": 8192, 00:18:13.563 "block_size": 4096, 00:18:13.563 "physical_block_size": 4096, 00:18:13.563 "uuid": "9d84f5b6-3deb-4d9b-a3c7-ac4f79187725", 00:18:13.563 "optimal_io_boundary": 0, 00:18:13.563 "md_size": 0, 00:18:13.563 "dif_type": 0, 00:18:13.563 "dif_is_head_of_md": false, 00:18:13.563 "dif_pi_format": 0 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "bdev_wait_for_examine" 
00:18:13.563 } 00:18:13.563 ] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "nbd", 00:18:13.563 "config": [] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "scheduler", 00:18:13.563 "config": [ 00:18:13.563 { 00:18:13.563 "method": "framework_set_scheduler", 00:18:13.563 "params": { 00:18:13.563 "name": "static" 00:18:13.563 } 00:18:13.563 } 00:18:13.563 ] 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "subsystem": "nvmf", 00:18:13.563 "config": [ 00:18:13.563 { 00:18:13.563 "method": "nvmf_set_config", 00:18:13.563 "params": { 00:18:13.563 "discovery_filter": "match_any", 00:18:13.563 "admin_cmd_passthru": { 00:18:13.563 "identify_ctrlr": false 00:18:13.563 }, 00:18:13.563 "dhchap_digests": [ 00:18:13.563 "sha256", 00:18:13.563 "sha384", 00:18:13.563 "sha512" 00:18:13.563 ], 00:18:13.563 "dhchap_dhgroups": [ 00:18:13.563 "null", 00:18:13.563 "ffdhe2048", 00:18:13.563 "ffdhe3072", 00:18:13.563 "ffdhe4096", 00:18:13.563 "ffdhe6144", 00:18:13.563 "ffdhe8192" 00:18:13.563 ] 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_set_max_subsystems", 00:18:13.563 "params": { 00:18:13.563 "max_subsystems": 1024 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_set_crdt", 00:18:13.563 "params": { 00:18:13.563 "crdt1": 0, 00:18:13.563 "crdt2": 0, 00:18:13.563 "crdt3": 0 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_create_transport", 00:18:13.563 "params": { 00:18:13.563 "trtype": "TCP", 00:18:13.563 "max_queue_depth": 128, 00:18:13.563 "max_io_qpairs_per_ctrlr": 127, 00:18:13.563 "in_capsule_data_size": 4096, 00:18:13.563 "max_io_size": 131072, 00:18:13.563 "io_unit_size": 131072, 00:18:13.563 "max_aq_depth": 128, 00:18:13.563 "num_shared_buffers": 511, 00:18:13.563 "buf_cache_size": 4294967295, 00:18:13.563 "dif_insert_or_strip": false, 00:18:13.563 "zcopy": false, 00:18:13.563 "c2h_success": false, 00:18:13.563 "sock_priority": 0, 00:18:13.563 "abort_timeout_sec": 1, 00:18:13.563 "ack_timeout": 0, 00:18:13.563 "data_wr_pool_size": 0 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_create_subsystem", 00:18:13.563 "params": { 00:18:13.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.563 "allow_any_host": false, 00:18:13.563 "serial_number": "00000000000000000000", 00:18:13.563 "model_number": "SPDK bdev Controller", 00:18:13.563 "max_namespaces": 32, 00:18:13.563 "min_cntlid": 1, 00:18:13.563 "max_cntlid": 65519, 00:18:13.563 "ana_reporting": false 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_subsystem_add_host", 00:18:13.563 "params": { 00:18:13.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.563 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.563 "psk": "key0" 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_subsystem_add_ns", 00:18:13.563 "params": { 00:18:13.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.563 "namespace": { 00:18:13.563 "nsid": 1, 00:18:13.563 "bdev_name": "malloc0", 00:18:13.563 "nguid": "9D84F5B63DEB4D9BA3C7AC4F79187725", 00:18:13.563 "uuid": "9d84f5b6-3deb-4d9b-a3c7-ac4f79187725", 00:18:13.563 "no_auto_visible": false 00:18:13.563 } 00:18:13.563 } 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "method": "nvmf_subsystem_add_listener", 00:18:13.563 "params": { 00:18:13.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.563 "listen_address": { 00:18:13.563 "trtype": "TCP", 00:18:13.563 "adrfam": "IPv4", 00:18:13.563 "traddr": "10.0.0.2", 00:18:13.564 "trsvcid": "4420" 00:18:13.564 }, 00:18:13.564 
"secure_channel": false, 00:18:13.564 "sock_impl": "ssl" 00:18:13.564 } 00:18:13.564 } 00:18:13.564 ] 00:18:13.564 } 00:18:13.564 ] 00:18:13.564 }' 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3347412 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3347412 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3347412 ']' 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.564 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.564 [2024-12-13 09:29:25.894204] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:13.564 [2024-12-13 09:29:25.894251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.822 [2024-12-13 09:29:25.958141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.822 [2024-12-13 09:29:25.995147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.822 [2024-12-13 09:29:25.995184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.822 [2024-12-13 09:29:25.995191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.822 [2024-12-13 09:29:25.995197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.822 [2024-12-13 09:29:25.995201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:13.822 [2024-12-13 09:29:25.995731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.080 [2024-12-13 09:29:26.208184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.080 [2024-12-13 09:29:26.240212] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.080 [2024-12-13 09:29:26.240421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3347479 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3347479 /var/tmp/bdevperf.sock 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3347479 ']' 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:14.647 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:14.647 "subsystems": [ 00:18:14.647 { 00:18:14.647 "subsystem": "keyring", 00:18:14.647 "config": [ 00:18:14.647 { 00:18:14.647 "method": "keyring_file_add_key", 00:18:14.647 "params": { 00:18:14.647 "name": "key0", 00:18:14.647 "path": "/tmp/tmp.O7Sx0Vvrdq" 00:18:14.647 } 00:18:14.647 } 00:18:14.647 ] 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "subsystem": "iobuf", 00:18:14.647 "config": [ 00:18:14.647 { 00:18:14.647 "method": "iobuf_set_options", 00:18:14.647 "params": { 00:18:14.647 "small_pool_count": 8192, 00:18:14.647 "large_pool_count": 1024, 00:18:14.647 "small_bufsize": 8192, 00:18:14.647 "large_bufsize": 135168, 00:18:14.647 "enable_numa": false 00:18:14.647 } 00:18:14.647 } 00:18:14.647 ] 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "subsystem": "sock", 00:18:14.647 "config": [ 00:18:14.647 { 00:18:14.647 "method": "sock_set_default_impl", 00:18:14.647 "params": { 00:18:14.647 "impl_name": "posix" 00:18:14.647 } 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "method": "sock_impl_set_options", 00:18:14.647 "params": { 00:18:14.647 "impl_name": "ssl", 00:18:14.647 "recv_buf_size": 4096, 00:18:14.647 "send_buf_size": 4096, 00:18:14.647 "enable_recv_pipe": true, 00:18:14.647 "enable_quickack": false, 00:18:14.647 "enable_placement_id": 0, 00:18:14.647 "enable_zerocopy_send_server": true, 00:18:14.647 "enable_zerocopy_send_client": false, 00:18:14.647 "zerocopy_threshold": 0, 00:18:14.647 "tls_version": 0, 00:18:14.647 "enable_ktls": false 00:18:14.647 } 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "method": "sock_impl_set_options", 00:18:14.647 "params": { 00:18:14.647 "impl_name": "posix", 00:18:14.647 "recv_buf_size": 2097152, 00:18:14.647 "send_buf_size": 2097152, 00:18:14.647 "enable_recv_pipe": true, 00:18:14.647 "enable_quickack": false, 00:18:14.647 "enable_placement_id": 0, 00:18:14.647 "enable_zerocopy_send_server": true, 00:18:14.647 "enable_zerocopy_send_client": false, 00:18:14.647 "zerocopy_threshold": 0, 00:18:14.647 "tls_version": 0, 00:18:14.647 "enable_ktls": false 00:18:14.647 } 00:18:14.647 } 00:18:14.647 ] 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "subsystem": "vmd", 00:18:14.647 "config": [] 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "subsystem": "accel", 00:18:14.647 "config": [ 00:18:14.647 { 00:18:14.647 "method": "accel_set_options", 00:18:14.647 "params": { 00:18:14.647 "small_cache_size": 128, 00:18:14.647 "large_cache_size": 16, 00:18:14.647 "task_count": 2048, 00:18:14.647 "sequence_count": 2048, 00:18:14.647 "buf_count": 2048 00:18:14.647 } 00:18:14.647 } 00:18:14.647 ] 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "subsystem": "bdev", 00:18:14.647 "config": [ 00:18:14.647 { 00:18:14.647 "method": "bdev_set_options", 00:18:14.647 "params": { 00:18:14.647 "bdev_io_pool_size": 65535, 00:18:14.647 "bdev_io_cache_size": 256, 00:18:14.647 "bdev_auto_examine": true, 00:18:14.647 "iobuf_small_cache_size": 128, 00:18:14.647 "iobuf_large_cache_size": 16 00:18:14.647 } 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "method": "bdev_raid_set_options", 00:18:14.647 "params": { 00:18:14.647 "process_window_size_kb": 1024, 00:18:14.647 "process_max_bandwidth_mb_sec": 0 00:18:14.647 } 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "method": "bdev_iscsi_set_options", 00:18:14.647 "params": { 00:18:14.647 "timeout_sec": 30 00:18:14.647 } 00:18:14.647 }, 00:18:14.647 { 00:18:14.647 "method": "bdev_nvme_set_options", 00:18:14.647 "params": { 00:18:14.647 "action_on_timeout": "none", 
00:18:14.647 "timeout_us": 0, 00:18:14.647 "timeout_admin_us": 0, 00:18:14.647 "keep_alive_timeout_ms": 10000, 00:18:14.647 "arbitration_burst": 0, 00:18:14.647 "low_priority_weight": 0, 00:18:14.648 "medium_priority_weight": 0, 00:18:14.648 "high_priority_weight": 0, 00:18:14.648 "nvme_adminq_poll_period_us": 10000, 00:18:14.648 "nvme_ioq_poll_period_us": 0, 00:18:14.648 "io_queue_requests": 512, 00:18:14.648 "delay_cmd_submit": true, 00:18:14.648 "transport_retry_count": 4, 00:18:14.648 "bdev_retry_count": 3, 00:18:14.648 "transport_ack_timeout": 0, 00:18:14.648 "ctrlr_loss_timeout_sec": 0, 00:18:14.648 "reconnect_delay_sec": 0, 00:18:14.648 "fast_io_fail_timeout_sec": 0, 00:18:14.648 "disable_auto_failback": false, 00:18:14.648 "generate_uuids": false, 00:18:14.648 "transport_tos": 0, 00:18:14.648 "nvme_error_stat": false, 00:18:14.648 "rdma_srq_size": 0, 00:18:14.648 "io_path_stat": false, 00:18:14.648 "allow_accel_sequence": false, 00:18:14.648 "rdma_max_cq_size": 0, 00:18:14.648 "rdma_cm_event_timeout_ms": 0, 00:18:14.648 "dhchap_digests": [ 00:18:14.648 "sha256", 00:18:14.648 "sha384", 00:18:14.648 "sha512" 00:18:14.648 ], 00:18:14.648 "dhchap_dhgroups": [ 00:18:14.648 "null", 00:18:14.648 "ffdhe2048", 00:18:14.648 "ffdhe3072", 00:18:14.648 "ffdhe4096", 00:18:14.648 "ffdhe6144", 00:18:14.648 "ffdhe8192" 00:18:14.648 ] 00:18:14.648 } 00:18:14.648 }, 00:18:14.648 { 00:18:14.648 "method": "bdev_nvme_attach_controller", 00:18:14.648 "params": { 00:18:14.648 "name": "nvme0", 00:18:14.648 "trtype": "TCP", 00:18:14.648 "adrfam": "IPv4", 00:18:14.648 "traddr": "10.0.0.2", 00:18:14.648 "trsvcid": "4420", 00:18:14.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.648 "prchk_reftag": false, 00:18:14.648 "prchk_guard": false, 00:18:14.648 "ctrlr_loss_timeout_sec": 0, 00:18:14.648 "reconnect_delay_sec": 0, 00:18:14.648 "fast_io_fail_timeout_sec": 0, 00:18:14.648 "psk": "key0", 00:18:14.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.648 "hdgst": false, 00:18:14.648 "ddgst": false, 00:18:14.648 "multipath": "multipath" 00:18:14.648 } 00:18:14.648 }, 00:18:14.648 { 00:18:14.648 "method": "bdev_nvme_set_hotplug", 00:18:14.648 "params": { 00:18:14.648 "period_us": 100000, 00:18:14.648 "enable": false 00:18:14.648 } 00:18:14.648 }, 00:18:14.648 { 00:18:14.648 "method": "bdev_enable_histogram", 00:18:14.648 "params": { 00:18:14.648 "name": "nvme0n1", 00:18:14.648 "enable": true 00:18:14.648 } 00:18:14.648 }, 00:18:14.648 { 00:18:14.648 "method": "bdev_wait_for_examine" 00:18:14.648 } 00:18:14.648 ] 00:18:14.648 }, 00:18:14.648 { 00:18:14.648 "subsystem": "nbd", 00:18:14.648 "config": [] 00:18:14.648 } 00:18:14.648 ] 00:18:14.648 }' 00:18:14.648 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.648 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.648 [2024-12-13 09:29:26.808353] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:18:14.648 [2024-12-13 09:29:26.808400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347479 ] 00:18:14.648 [2024-12-13 09:29:26.871915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.648 [2024-12-13 09:29:26.912020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.906 [2024-12-13 09:29:27.065835] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.472 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.730 Running I/O for 1 seconds... 00:18:16.664 5373.00 IOPS, 20.99 MiB/s 00:18:16.664 Latency(us) 00:18:16.664 [2024-12-13T08:29:29.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.664 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:16.664 Verification LBA range: start 0x0 length 0x2000 00:18:16.664 nvme0n1 : 1.02 5412.22 21.14 0.00 0.00 23476.07 4899.60 24841.26 00:18:16.664 [2024-12-13T08:29:29.030Z] =================================================================================================================== 00:18:16.664 [2024-12-13T08:29:29.030Z] Total : 5412.22 21.14 0.00 0.00 23476.07 4899.60 24841.26 00:18:16.664 { 00:18:16.664 "results": [ 00:18:16.664 { 00:18:16.664 "job": "nvme0n1", 00:18:16.664 "core_mask": "0x2", 00:18:16.664 "workload": "verify", 00:18:16.664 "status": "finished", 00:18:16.664 "verify_range": { 00:18:16.664 "start": 0, 00:18:16.664 "length": 8192 00:18:16.664 }, 00:18:16.664 "queue_depth": 128, 00:18:16.664 "io_size": 4096, 00:18:16.664 "runtime": 1.016403, 00:18:16.664 "iops": 5412.223301190571, 00:18:16.664 "mibps": 21.141497270275668, 00:18:16.664 "io_failed": 0, 00:18:16.664 "io_timeout": 0, 00:18:16.664 "avg_latency_us": 23476.06616424719, 00:18:16.664 "min_latency_us": 4899.596190476191, 00:18:16.664 "max_latency_us": 24841.26476190476 00:18:16.664 } 00:18:16.664 ], 00:18:16.664 "core_count": 1 00:18:16.664 } 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:16.664 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:16.664 nvmf_trace.0 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3347479 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3347479 ']' 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3347479 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3347479 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3347479' 00:18:16.923 killing process with pid 3347479 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3347479 00:18:16.923 Received shutdown signal, test time was about 1.000000 seconds 00:18:16.923 00:18:16.923 Latency(us) 00:18:16.923 [2024-12-13T08:29:29.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.923 [2024-12-13T08:29:29.289Z] =================================================================================================================== 00:18:16.923 [2024-12-13T08:29:29.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3347479 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.923 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.923 rmmod nvme_tcp 00:18:16.923 rmmod nvme_fabrics 00:18:17.181 rmmod nvme_keyring 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.181 09:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3347412 ']' 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3347412 ']' 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3347412' 00:18:17.181 killing process with pid 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3347412 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:17.181 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.182 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.182 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:17.182 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.182 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.182 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hj91Afzfyd /tmp/tmp.ZjAg0Q93RN /tmp/tmp.O7Sx0Vvrdq 00:18:19.715 00:18:19.715 real 1m18.380s 00:18:19.715 user 2m0.902s 00:18:19.715 sys 0m29.040s 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 ************************************ 00:18:19.715 END TEST nvmf_tls 
00:18:19.715 ************************************ 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.715 ************************************ 00:18:19.715 START TEST nvmf_fips 00:18:19.715 ************************************ 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:19.715 * Looking for test storage... 00:18:19.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.715 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.715 --rc genhtml_branch_coverage=1 00:18:19.715 --rc genhtml_function_coverage=1 00:18:19.715 --rc genhtml_legend=1 00:18:19.715 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.716 --rc genhtml_branch_coverage=1 00:18:19.716 --rc genhtml_function_coverage=1 00:18:19.716 --rc genhtml_legend=1 00:18:19.716 --rc geninfo_all_blocks=1 00:18:19.716 --rc geninfo_unexecuted_blocks=1 00:18:19.716 00:18:19.716 ' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:19.716 09:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.716 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:19.717 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:19.717 Error setting digest 00:18:19.717 40D25716537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:19.717 40D25716537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:19.717 
09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:19.717 09:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.985 09:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:24.985 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:24.985 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:24.986 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:24.986 09:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:24.986 Found net devices under 0000:af:00.0: cvl_0_0 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:24.986 Found net devices under 0000:af:00.1: cvl_0_1 00:18:24.986 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:25.245 09:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:25.245 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:25.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:18:25.504 00:18:25.504 --- 10.0.0.2 ping statistics --- 00:18:25.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.504 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:25.504 00:18:25.504 --- 10.0.0.1 ping statistics --- 00:18:25.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.504 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3351419 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3351419 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3351419 ']' 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.504 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.504 [2024-12-13 09:29:37.828563] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
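Condensed for readability, the network bring-up traced above amounts to the following; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the values detected on this particular node, not fixed constants, and long workspace paths are abbreviated:

  # Put the target-side port in its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target runs inside the namespace

The helper also tags the iptables rule with an SPDK_NVMF comment (see the full command in the trace) so that teardown can find and remove it later.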
00:18:25.504 [2024-12-13 09:29:37.828613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.763 [2024-12-13 09:29:37.893189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.763 [2024-12-13 09:29:37.930232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.764 [2024-12-13 09:29:37.930270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.764 [2024-12-13 09:29:37.930278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.764 [2024-12-13 09:29:37.930285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.764 [2024-12-13 09:29:37.930290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.764 [2024-12-13 09:29:37.930773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Mxl 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Mxl 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Mxl 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Mxl 00:18:25.764 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.023 [2024-12-13 09:29:38.243460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.023 [2024-12-13 09:29:38.259461] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.023 [2024-12-13 09:29:38.259664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.023 malloc0 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.023 09:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3351653 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3351653 /var/tmp/bdevperf.sock 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3351653 ']' 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.023 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:26.023 [2024-12-13 09:29:38.387651] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:26.023 [2024-12-13 09:29:38.387700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351653 ] 00:18:26.282 [2024-12-13 09:29:38.445564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.282 [2024-12-13 09:29:38.486024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.282 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.282 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:26.282 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Mxl 00:18:26.540 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.799 [2024-12-13 09:29:38.938792] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.799 TLSTESTn1 00:18:26.799 09:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.799 Running I/O for 10 seconds... 
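The initiator side of the FIPS test boils down to the RPC sequence below; rpc.py and bdevperf paths are shortened from the full workspace paths shown in the trace, key0 is simply the label chosen for the PSK, and /tmp/spdk-psk.Mxl is the mktemp result from this run:

  # Hand the NVMe/TCP TLS PSK to bdevperf's keyring, attach over TLS, then drive I/O.
  chmod 0600 /tmp/spdk-psk.Mxl                        # restrict permissions on the PSK interchange file
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Mxl
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests  # runs the queued verify workload for 10 seconds

A successful attach exposes the TLSTESTn1 bdev, which is the device exercised in the I/O statistics that follow.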
00:18:29.110 5238.00 IOPS, 20.46 MiB/s [2024-12-13T08:29:42.411Z] 5353.00 IOPS, 20.91 MiB/s [2024-12-13T08:29:43.346Z] 5418.00 IOPS, 21.16 MiB/s [2024-12-13T08:29:44.278Z] 5428.75 IOPS, 21.21 MiB/s [2024-12-13T08:29:45.212Z] 5439.00 IOPS, 21.25 MiB/s [2024-12-13T08:29:46.145Z] 5451.33 IOPS, 21.29 MiB/s [2024-12-13T08:29:47.521Z] 5470.57 IOPS, 21.37 MiB/s [2024-12-13T08:29:48.455Z] 5495.38 IOPS, 21.47 MiB/s [2024-12-13T08:29:49.391Z] 5488.33 IOPS, 21.44 MiB/s [2024-12-13T08:29:49.391Z] 5468.90 IOPS, 21.36 MiB/s 00:18:37.025 Latency(us) 00:18:37.025 [2024-12-13T08:29:49.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:37.025 Verification LBA range: start 0x0 length 0x2000 00:18:37.025 TLSTESTn1 : 10.03 5462.97 21.34 0.00 0.00 23378.13 6147.90 41194.06 00:18:37.025 [2024-12-13T08:29:49.391Z] =================================================================================================================== 00:18:37.025 [2024-12-13T08:29:49.391Z] Total : 5462.97 21.34 0.00 0.00 23378.13 6147.90 41194.06 00:18:37.025 { 00:18:37.025 "results": [ 00:18:37.025 { 00:18:37.025 "job": "TLSTESTn1", 00:18:37.025 "core_mask": "0x4", 00:18:37.025 "workload": "verify", 00:18:37.025 "status": "finished", 00:18:37.025 "verify_range": { 00:18:37.025 "start": 0, 00:18:37.025 "length": 8192 00:18:37.025 }, 00:18:37.025 "queue_depth": 128, 00:18:37.025 "io_size": 4096, 00:18:37.025 "runtime": 10.034289, 00:18:37.025 "iops": 5462.968028925617, 00:18:37.025 "mibps": 21.33971886299069, 00:18:37.025 "io_failed": 0, 00:18:37.025 "io_timeout": 0, 00:18:37.025 "avg_latency_us": 23378.126880277843, 00:18:37.025 "min_latency_us": 6147.900952380953, 00:18:37.025 "max_latency_us": 41194.05714285714 00:18:37.025 } 00:18:37.025 ], 00:18:37.025 "core_count": 1 00:18:37.025 } 00:18:37.025 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:37.025 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:37.025 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:37.025 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:37.026 nvmf_trace.0 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3351653 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3351653 ']' 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3351653 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3351653 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3351653' 00:18:37.026 killing process with pid 3351653 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3351653 00:18:37.026 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.026 00:18:37.026 Latency(us) 00:18:37.026 [2024-12-13T08:29:49.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.026 [2024-12-13T08:29:49.392Z] =================================================================================================================== 00:18:37.026 [2024-12-13T08:29:49.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.026 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3351653 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.284 rmmod nvme_tcp 00:18:37.284 rmmod nvme_fabrics 00:18:37.284 rmmod nvme_keyring 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3351419 ']' 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3351419 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3351419 ']' 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3351419 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3351419 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.284 09:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3351419' 00:18:37.284 killing process with pid 3351419 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3351419 00:18:37.284 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3351419 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.543 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.077 09:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Mxl 00:18:40.078 00:18:40.078 real 0m20.164s 00:18:40.078 user 0m21.097s 00:18:40.078 sys 0m9.305s 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.078 ************************************ 00:18:40.078 END TEST nvmf_fips 00:18:40.078 ************************************ 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.078 ************************************ 00:18:40.078 START TEST nvmf_control_msg_list 00:18:40.078 ************************************ 00:18:40.078 09:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:40.078 * Looking for test storage... 
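Before the next test begins, the cleanup trap traced just above tears the FIPS setup down; condensed, with the PIDs and cvl_* names specific to this run:

  # Stop bdevperf and nvmf_tgt, unload the kernel NVMe/TCP modules, undo network state.
  kill 3351653                                          # bdevperf
  kill 3351419                                          # nvmf_tgt
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's tagged ACCEPT rule
  # _remove_spdk_ns (traced above) removes the cvl_0_0_ns_spdk namespace; its body is not expanded in this log.
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.Mxl                               # discard the temporary PSK file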
00:18:40.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.078 --rc genhtml_branch_coverage=1 00:18:40.078 --rc genhtml_function_coverage=1 00:18:40.078 --rc genhtml_legend=1 00:18:40.078 --rc geninfo_all_blocks=1 00:18:40.078 --rc geninfo_unexecuted_blocks=1 00:18:40.078 00:18:40.078 ' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.078 --rc genhtml_branch_coverage=1 00:18:40.078 --rc genhtml_function_coverage=1 00:18:40.078 --rc genhtml_legend=1 00:18:40.078 --rc geninfo_all_blocks=1 00:18:40.078 --rc geninfo_unexecuted_blocks=1 00:18:40.078 00:18:40.078 ' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.078 --rc genhtml_branch_coverage=1 00:18:40.078 --rc genhtml_function_coverage=1 00:18:40.078 --rc genhtml_legend=1 00:18:40.078 --rc geninfo_all_blocks=1 00:18:40.078 --rc geninfo_unexecuted_blocks=1 00:18:40.078 00:18:40.078 ' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.078 --rc genhtml_branch_coverage=1 00:18:40.078 --rc genhtml_function_coverage=1 00:18:40.078 --rc genhtml_legend=1 00:18:40.078 --rc geninfo_all_blocks=1 00:18:40.078 --rc geninfo_unexecuted_blocks=1 00:18:40.078 00:18:40.078 ' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.078 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.079 09:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:18:45.350 09:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:45.350 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.350 09:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:45.350 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:45.350 Found net devices under 0000:af:00.0: cvl_0_0 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:45.350 Found net devices under 0000:af:00.1: cvl_0_1 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.350 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.351 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.351 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.351 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.609 09:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:45.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:18:45.609 00:18:45.609 --- 10.0.0.2 ping statistics --- 00:18:45.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.609 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:18:45.609 00:18:45.609 --- 10.0.0.1 ping statistics --- 00:18:45.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.609 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3356854 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3356854 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3356854 ']' 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.609 09:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.609 [2024-12-13 09:29:57.822388] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:18:45.609 [2024-12-13 09:29:57.822434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.609 [2024-12-13 09:29:57.883558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.609 [2024-12-13 09:29:57.924513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.609 [2024-12-13 09:29:57.924547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.609 [2024-12-13 09:29:57.924555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.609 [2024-12-13 09:29:57.924560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.609 [2024-12-13 09:29:57.924566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:45.609 [2024-12-13 09:29:57.925064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 [2024-12-13 09:29:58.064453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 Malloc0 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.868 09:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 [2024-12-13 09:29:58.104671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3356927 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3356928 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3356929 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3356927 00:18:45.868 09:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:45.868 [2024-12-13 09:29:58.173219] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:45.868 [2024-12-13 09:29:58.173393] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:45.868 [2024-12-13 09:29:58.183241] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:47.241 Initializing NVMe Controllers 00:18:47.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:47.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:47.241 Initialization complete. Launching workers. 
00:18:47.241 ======================================================== 00:18:47.241 Latency(us) 00:18:47.241 Device Information : IOPS MiB/s Average min max 00:18:47.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5098.95 19.92 195.76 142.63 399.44 00:18:47.241 ======================================================== 00:18:47.241 Total : 5098.95 19.92 195.76 142.63 399.44 00:18:47.241 00:18:47.241 Initializing NVMe Controllers 00:18:47.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:47.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:47.241 Initialization complete. Launching workers. 00:18:47.241 ======================================================== 00:18:47.241 Latency(us) 00:18:47.241 Device Information : IOPS MiB/s Average min max 00:18:47.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4641.00 18.13 215.08 146.76 40413.80 00:18:47.241 ======================================================== 00:18:47.241 Total : 4641.00 18.13 215.08 146.76 40413.80 00:18:47.241 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3356928 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3356929 00:18:47.241 Initializing NVMe Controllers 00:18:47.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:47.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:47.241 Initialization complete. Launching workers. 00:18:47.241 ======================================================== 00:18:47.241 Latency(us) 00:18:47.241 Device Information : IOPS MiB/s Average min max 00:18:47.241 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4774.00 18.65 209.09 150.30 41048.18 00:18:47.241 ======================================================== 00:18:47.241 Total : 4774.00 18.65 209.09 150.30 41048.18 00:18:47.241 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.241 rmmod nvme_tcp 00:18:47.241 rmmod nvme_fabrics 00:18:47.241 rmmod nvme_keyring 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 3356854 ']' 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3356854 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3356854 ']' 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3356854 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3356854 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3356854' 00:18:47.241 killing process with pid 3356854 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3356854 00:18:47.241 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3356854 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.500 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.400 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.400 00:18:49.400 real 0m9.819s 00:18:49.400 user 0m6.363s 00:18:49.400 sys 0m5.382s 00:18:49.400 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.400 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:49.400 ************************************ 00:18:49.400 END TEST nvmf_control_msg_list 00:18:49.400 ************************************ 
00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.658 ************************************ 00:18:49.658 START TEST nvmf_wait_for_buf 00:18:49.658 ************************************ 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:49.658 * Looking for test storage... 00:18:49.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.658 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.659 --rc genhtml_branch_coverage=1 00:18:49.659 --rc genhtml_function_coverage=1 00:18:49.659 --rc genhtml_legend=1 00:18:49.659 --rc geninfo_all_blocks=1 00:18:49.659 --rc geninfo_unexecuted_blocks=1 00:18:49.659 00:18:49.659 ' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.659 --rc genhtml_branch_coverage=1 00:18:49.659 --rc genhtml_function_coverage=1 00:18:49.659 --rc genhtml_legend=1 00:18:49.659 --rc geninfo_all_blocks=1 00:18:49.659 --rc geninfo_unexecuted_blocks=1 00:18:49.659 00:18:49.659 ' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.659 --rc genhtml_branch_coverage=1 00:18:49.659 --rc genhtml_function_coverage=1 00:18:49.659 --rc genhtml_legend=1 00:18:49.659 --rc geninfo_all_blocks=1 00:18:49.659 --rc geninfo_unexecuted_blocks=1 00:18:49.659 00:18:49.659 ' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:49.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.659 --rc genhtml_branch_coverage=1 00:18:49.659 --rc genhtml_function_coverage=1 00:18:49.659 --rc genhtml_legend=1 00:18:49.659 --rc geninfo_all_blocks=1 00:18:49.659 --rc geninfo_unexecuted_blocks=1 00:18:49.659 00:18:49.659 ' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.659 09:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.659 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.660 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.660 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:49.660 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:49.660 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.660 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:56.221 
09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:56.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:56.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.221 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:56.222 Found net devices under 0000:af:00.0: cvl_0_0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:56.222 Found net devices under 0000:af:00.1: cvl_0_1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.222 09:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:56.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:18:56.222 00:18:56.222 --- 10.0.0.2 ping statistics --- 00:18:56.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.222 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:56.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:56.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:18:56.222 00:18:56.222 --- 10.0.0.1 ping statistics --- 00:18:56.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.222 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3360745 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3360745 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3360745 ']' 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.222 [2024-12-13 09:30:07.808498] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:18:56.222 [2024-12-13 09:30:07.808549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.222 [2024-12-13 09:30:07.875172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.222 [2024-12-13 09:30:07.914809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.222 [2024-12-13 09:30:07.914860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.222 [2024-12-13 09:30:07.914867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.222 [2024-12-13 09:30:07.914874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.222 [2024-12-13 09:30:07.914879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.222 [2024-12-13 09:30:07.915367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.222 09:30:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.222 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.222 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:56.222 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.222 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 Malloc0 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 [2024-12-13 09:30:08.105466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 [2024-12-13 09:30:08.133673] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.223 09:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:56.223 [2024-12-13 09:30:08.219528] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:57.598 Initializing NVMe Controllers 00:18:57.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:57.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:57.598 Initialization complete. Launching workers. 00:18:57.598 ======================================================== 00:18:57.598 Latency(us) 00:18:57.598 Device Information : IOPS MiB/s Average min max 00:18:57.598 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.85 15.48 33453.42 6871.97 71835.21 00:18:57.598 ======================================================== 00:18:57.598 Total : 123.85 15.48 33453.42 6871.97 71835.21 00:18:57.598 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.598 rmmod nvme_tcp 00:18:57.598 rmmod nvme_fabrics 00:18:57.598 rmmod nvme_keyring 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3360745 ']' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3360745 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3360745 ']' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3360745 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3360745 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3360745' 00:18:57.598 killing process with pid 3360745 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3360745 00:18:57.598 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3360745 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.856 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:59.758 00:18:59.758 real 0m10.245s 00:18:59.758 user 0m3.919s 00:18:59.758 sys 0m4.750s 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:59.758 ************************************ 00:18:59.758 END TEST nvmf_wait_for_buf 00:18:59.758 ************************************ 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:18:59.758 09:30:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.758 09:30:12 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:05.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:05.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:05.021 Found net devices under 0000:af:00.0: cvl_0_0 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:05.021 Found net devices under 0000:af:00.1: cvl_0_1 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.021 ************************************ 00:19:05.021 START TEST nvmf_perf_adq 00:19:05.021 ************************************ 00:19:05.021 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:05.281 * Looking for test storage... 00:19:05.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.281 09:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.281 --rc genhtml_branch_coverage=1 00:19:05.281 --rc genhtml_function_coverage=1 00:19:05.281 --rc genhtml_legend=1 00:19:05.281 --rc geninfo_all_blocks=1 00:19:05.281 --rc geninfo_unexecuted_blocks=1 00:19:05.281 00:19:05.281 ' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.281 --rc genhtml_branch_coverage=1 00:19:05.281 --rc genhtml_function_coverage=1 00:19:05.281 --rc genhtml_legend=1 00:19:05.281 --rc geninfo_all_blocks=1 00:19:05.281 --rc geninfo_unexecuted_blocks=1 00:19:05.281 00:19:05.281 ' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.281 --rc genhtml_branch_coverage=1 00:19:05.281 --rc genhtml_function_coverage=1 00:19:05.281 --rc genhtml_legend=1 00:19:05.281 --rc geninfo_all_blocks=1 00:19:05.281 --rc geninfo_unexecuted_blocks=1 00:19:05.281 00:19:05.281 ' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.281 --rc genhtml_branch_coverage=1 00:19:05.281 --rc genhtml_function_coverage=1 00:19:05.281 --rc genhtml_legend=1 00:19:05.281 --rc geninfo_all_blocks=1 00:19:05.281 --rc geninfo_unexecuted_blocks=1 00:19:05.281 00:19:05.281 ' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.281 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.282 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.282 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:05.282 09:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.282 09:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:10.719 09:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:10.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:10.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:10.719 Found net devices under 0000:af:00.0: cvl_0_0 00:19:10.719 09:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.719 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:10.720 Found net devices under 0000:af:00.1: cvl_0_1 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:10.720 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:11.655 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:14.184 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:19.457 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.457 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:19.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:19.458 Found net devices under 0000:af:00.0: cvl_0_0 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:19.458 Found net devices under 0000:af:00.1: cvl_0_1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:19:19.458 00:19:19.458 --- 10.0.0.2 ping statistics --- 00:19:19.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.458 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:19:19.458 00:19:19.458 --- 10.0.0.1 ping statistics --- 00:19:19.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.458 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3369432 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3369432 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3369432 ']' 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.458 09:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 [2024-12-13 09:30:31.851467] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
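The commands traced above build the loopback topology that every nvmf TCP phy test in this log reuses: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target-side interface at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port between them. Condensed to the essential commands (all taken verbatim from the trace; shown only as a readable summary, not an extra step of the run):

# Target-side port lives in its own namespace; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, tagged so cleanup can find the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity checks in both directions.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1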
00:19:19.716 [2024-12-13 09:30:31.851518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.716 [2024-12-13 09:30:31.918326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.716 [2024-12-13 09:30:31.960398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.716 [2024-12-13 09:30:31.960437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.716 [2024-12-13 09:30:31.960444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.716 [2024-12-13 09:30:31.960454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.716 [2024-12-13 09:30:31.960459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.716 [2024-12-13 09:30:31.961784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.716 [2024-12-13 09:30:31.961883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.716 [2024-12-13 09:30:31.961972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.716 [2024-12-13 09:30:31.961973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.716 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 
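With the target started under --wait-for-rpc, adq_configure_nvmf_target drives the rest of the setup over JSON-RPC: the posix sock implementation options seen above, then (in the trace that follows) framework_start_init, a TCP transport created with an explicit socket priority, and a Malloc-backed subsystem listening on 10.0.0.2:4420. A sketch of the same sequence issued directly with scripts/rpc.py — the rpc_cmd wrapper used in the trace ultimately calls it, and the relative path here is an assumption — would look like:

# Target listens for RPCs on the default /var/tmp/spdk.sock, so no netns prefix is needed.
./scripts/rpc.py sock_get_default_impl          # returns "posix" in this run
./scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420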
09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 [2024-12-13 09:30:32.179912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 Malloc1 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.974 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.975 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.975 [2024-12-13 09:30:32.241582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.975 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.975 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3369555 00:19:19.975 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:19.975 09:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:22.497 "tick_rate": 2100000000, 00:19:22.497 "poll_groups": [ 00:19:22.497 { 00:19:22.497 "name": "nvmf_tgt_poll_group_000", 00:19:22.497 "admin_qpairs": 1, 00:19:22.497 "io_qpairs": 1, 00:19:22.497 "current_admin_qpairs": 1, 00:19:22.497 "current_io_qpairs": 1, 00:19:22.497 "pending_bdev_io": 0, 00:19:22.497 "completed_nvme_io": 20636, 00:19:22.497 "transports": [ 00:19:22.497 { 00:19:22.497 "trtype": "TCP" 00:19:22.497 } 00:19:22.497 ] 00:19:22.497 }, 00:19:22.497 { 00:19:22.497 "name": "nvmf_tgt_poll_group_001", 00:19:22.497 "admin_qpairs": 0, 00:19:22.497 "io_qpairs": 1, 00:19:22.497 "current_admin_qpairs": 0, 00:19:22.497 "current_io_qpairs": 1, 00:19:22.497 "pending_bdev_io": 0, 00:19:22.497 "completed_nvme_io": 20609, 00:19:22.497 "transports": [ 00:19:22.497 { 00:19:22.497 "trtype": "TCP" 00:19:22.497 } 00:19:22.497 ] 00:19:22.497 }, 00:19:22.497 { 00:19:22.497 "name": "nvmf_tgt_poll_group_002", 00:19:22.497 "admin_qpairs": 0, 00:19:22.497 "io_qpairs": 1, 00:19:22.497 "current_admin_qpairs": 0, 00:19:22.497 "current_io_qpairs": 1, 00:19:22.497 "pending_bdev_io": 0, 00:19:22.497 "completed_nvme_io": 20589, 00:19:22.497 "transports": [ 00:19:22.497 { 00:19:22.497 "trtype": "TCP" 00:19:22.497 } 00:19:22.497 ] 00:19:22.497 }, 00:19:22.497 { 00:19:22.497 "name": "nvmf_tgt_poll_group_003", 00:19:22.497 "admin_qpairs": 0, 00:19:22.497 "io_qpairs": 1, 00:19:22.497 "current_admin_qpairs": 0, 00:19:22.497 "current_io_qpairs": 1, 00:19:22.497 "pending_bdev_io": 0, 00:19:22.497 "completed_nvme_io": 20311, 00:19:22.497 "transports": [ 00:19:22.497 { 00:19:22.497 "trtype": "TCP" 00:19:22.497 } 00:19:22.497 ] 00:19:22.497 } 00:19:22.497 ] 00:19:22.497 }' 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:22.497 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3369555 00:19:30.593 Initializing NVMe Controllers 00:19:30.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:30.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:30.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:30.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:19:30.593 Initialization complete. Launching workers. 00:19:30.593 ======================================================== 00:19:30.593 Latency(us) 00:19:30.593 Device Information : IOPS MiB/s Average min max 00:19:30.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10988.95 42.93 5825.50 1863.60 9805.56 00:19:30.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10934.75 42.71 5854.12 1684.20 10346.56 00:19:30.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10816.36 42.25 5915.62 1592.41 10173.26 00:19:30.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10917.75 42.65 5862.51 1532.42 11039.47 00:19:30.593 ======================================================== 00:19:30.593 Total : 43657.81 170.54 5864.25 1532.42 11039.47 00:19:30.593 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.593 rmmod nvme_tcp 00:19:30.593 rmmod nvme_fabrics 00:19:30.593 rmmod nvme_keyring 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3369432 ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3369432 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3369432 ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3369432 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369432 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369432' 00:19:30.593 killing process with pid 3369432 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3369432 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3369432 00:19:30.593 09:30:42 
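Worth noting how the test graded that first run: perf_adq.sh@85-87 pulled nvmf_get_stats and required all four poll groups to be serving exactly one I/O qpair each, the expected spread when placement-id is disabled (the stats JSON above shows completed_nvme_io split almost evenly, roughly 20.3k-20.6k per group). The check itself is just a jq filter over the stats output; a minimal reconstruction (rpc.py path assumed, the trace uses the rpc_cmd wrapper):

# Each poll group with current_io_qpairs == 1 emits one line from jq; wc -l counts them.
count=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
[[ $count -ne 4 ]] && echo "expected 4 active poll groups, got $count" >&2

The ADQ-enabled run later in the log inverts the filter (current_io_qpairs == 0) and requires at least two idle poll groups, since socket steering should concentrate the connections onto fewer groups.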
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:30.593 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.594 09:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.497 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.497 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:32.497 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:32.497 09:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:33.873 09:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:36.406 09:30:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:41.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:41.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.679 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:41.680 Found net devices under 0000:af:00.0: cvl_0_0 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:41.680 Found net devices under 0000:af:00.1: cvl_0_1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.680 09:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:41.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:19:41.680 00:19:41.680 --- 10.0.0.2 ping statistics --- 00:19:41.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.680 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:19:41.680 00:19:41.680 --- 10.0.0.1 ping statistics --- 00:19:41.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.680 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:41.680 net.core.busy_poll = 1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:41.680 net.core.busy_read = 1 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:41.680 09:30:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3373483 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3373483 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3373483 ']' 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:41.939 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:41.939 [2024-12-13 09:30:54.235174] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:19:41.939 [2024-12-13 09:30:54.235220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.939 [2024-12-13 09:30:54.302217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.198 [2024-12-13 09:30:54.344206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
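Before the second target instance starts, adq_configure_driver prepares the ice-driven port for ADQ inside the target namespace: hardware TC offload is enabled, the channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled system-wide, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic (10.0.0.2:4420) into TC 1 in hardware. The commands below mirror the trace and are shown only as a readable summary of that sequence (the set_xps_rxqs path is shortened to its repo-relative form):

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }

ns ethtool --offload cvl_0_0 hw-tc-offload on
ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = default queues (2@0), TC1 = the ADQ queue set (2@2), offloaded to hw.
ns /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only (skip_sw).
ns /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
   dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Pin XPS/receive queues for the ADQ queue set (helper script shipped with SPDK).
ns ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0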
00:19:42.198 [2024-12-13 09:30:54.344242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.198 [2024-12-13 09:30:54.344250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.198 [2024-12-13 09:30:54.344256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.198 [2024-12-13 09:30:54.344261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.198 [2024-12-13 09:30:54.345623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.198 [2024-12-13 09:30:54.345642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.198 [2024-12-13 09:30:54.345732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.198 [2024-12-13 09:30:54.345734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.198 09:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.198 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.198 [2024-12-13 09:30:54.555837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.199 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.199 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:42.199 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.199 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.457 Malloc1 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.457 [2024-12-13 09:30:54.617415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3373682 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:42.457 09:30:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.362 09:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:44.362 "tick_rate": 2100000000, 00:19:44.362 "poll_groups": [ 00:19:44.362 { 00:19:44.362 "name": "nvmf_tgt_poll_group_000", 00:19:44.362 "admin_qpairs": 1, 00:19:44.362 "io_qpairs": 1, 00:19:44.362 "current_admin_qpairs": 1, 00:19:44.362 "current_io_qpairs": 1, 00:19:44.362 "pending_bdev_io": 0, 00:19:44.362 "completed_nvme_io": 28251, 00:19:44.362 "transports": [ 00:19:44.362 { 00:19:44.362 "trtype": "TCP" 00:19:44.362 } 00:19:44.362 ] 00:19:44.362 }, 00:19:44.362 { 00:19:44.362 "name": "nvmf_tgt_poll_group_001", 00:19:44.362 "admin_qpairs": 0, 00:19:44.362 "io_qpairs": 3, 00:19:44.362 "current_admin_qpairs": 0, 00:19:44.362 "current_io_qpairs": 3, 00:19:44.362 "pending_bdev_io": 0, 00:19:44.362 "completed_nvme_io": 28483, 00:19:44.362 "transports": [ 00:19:44.362 { 00:19:44.362 "trtype": "TCP" 00:19:44.362 } 00:19:44.362 ] 00:19:44.362 }, 00:19:44.362 { 00:19:44.362 "name": "nvmf_tgt_poll_group_002", 00:19:44.362 "admin_qpairs": 0, 00:19:44.362 "io_qpairs": 0, 00:19:44.362 "current_admin_qpairs": 0, 00:19:44.362 "current_io_qpairs": 0, 00:19:44.362 "pending_bdev_io": 0, 00:19:44.362 "completed_nvme_io": 0, 00:19:44.362 "transports": [ 00:19:44.362 { 00:19:44.362 "trtype": "TCP" 00:19:44.362 } 00:19:44.362 ] 00:19:44.362 }, 00:19:44.362 { 00:19:44.362 "name": "nvmf_tgt_poll_group_003", 00:19:44.362 "admin_qpairs": 0, 00:19:44.362 "io_qpairs": 0, 00:19:44.362 "current_admin_qpairs": 0, 00:19:44.362 "current_io_qpairs": 0, 00:19:44.362 "pending_bdev_io": 0, 00:19:44.362 "completed_nvme_io": 0, 00:19:44.362 "transports": [ 00:19:44.362 { 00:19:44.362 "trtype": "TCP" 00:19:44.362 } 00:19:44.362 ] 00:19:44.362 } 00:19:44.362 ] 00:19:44.362 }' 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:44.362 09:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3373682 00:19:52.480 Initializing NVMe Controllers 00:19:52.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:52.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:52.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:52.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:52.480 Initialization complete. Launching workers. 
00:19:52.480 ======================================================== 00:19:52.480 Latency(us) 00:19:52.480 Device Information : IOPS MiB/s Average min max 00:19:52.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5231.80 20.44 12232.49 1589.64 59664.96 00:19:52.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4997.80 19.52 12841.60 1693.72 58474.48 00:19:52.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4894.90 19.12 13073.03 1633.22 57931.81 00:19:52.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15182.50 59.31 4214.78 1559.67 6944.01 00:19:52.480 ======================================================== 00:19:52.480 Total : 30307.00 118.39 8452.17 1559.67 59664.96 00:19:52.480 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.480 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.480 rmmod nvme_tcp 00:19:52.740 rmmod nvme_fabrics 00:19:52.740 rmmod nvme_keyring 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3373483 ']' 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3373483 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3373483 ']' 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3373483 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3373483 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3373483' 00:19:52.740 killing process with pid 3373483 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3373483 00:19:52.740 09:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3373483 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.999 
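The second latency table shows the skew this scenario is after: one queue pair completes about 15.2k IOPS at roughly 4.2 ms average while the other three sit near 5k IOPS at 12-13 ms, consistent with only part of the traffic landing on the busy-polled, hardware-steered queue set. After the run, nvmftestfini unwinds everything; the visible steps in the surrounding trace boil down to the commands below (namespace removal itself happens inside the _remove_spdk_ns helper, whose output is filtered from the trace):

# Unload the host-side NVMe/TCP modules; nvme_fabrics and nvme_keyring go with them.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target process and drop the firewall rules the harness tagged earlier.
kill 3373483 && wait 3373483
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Finally flush the initiator-side address (the target-side namespace is removed separately).
ip -4 addr flush cvl_0_1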
09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.999 09:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:19:56.289 00:19:56.289 real 0m50.887s 00:19:56.289 user 2m44.002s 00:19:56.289 sys 0m9.889s 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 ************************************ 00:19:56.289 END TEST nvmf_perf_adq 00:19:56.289 ************************************ 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 ************************************ 00:19:56.289 START TEST nvmf_shutdown 00:19:56.289 ************************************ 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:56.289 * Looking for test storage... 
00:19:56.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.289 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:56.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.289 --rc genhtml_branch_coverage=1 00:19:56.289 --rc genhtml_function_coverage=1 00:19:56.289 --rc genhtml_legend=1 00:19:56.289 --rc geninfo_all_blocks=1 00:19:56.290 --rc geninfo_unexecuted_blocks=1 00:19:56.290 00:19:56.290 ' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:56.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.290 --rc genhtml_branch_coverage=1 00:19:56.290 --rc genhtml_function_coverage=1 00:19:56.290 --rc genhtml_legend=1 00:19:56.290 --rc geninfo_all_blocks=1 00:19:56.290 --rc geninfo_unexecuted_blocks=1 00:19:56.290 00:19:56.290 ' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:56.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.290 --rc genhtml_branch_coverage=1 00:19:56.290 --rc genhtml_function_coverage=1 00:19:56.290 --rc genhtml_legend=1 00:19:56.290 --rc geninfo_all_blocks=1 00:19:56.290 --rc geninfo_unexecuted_blocks=1 00:19:56.290 00:19:56.290 ' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:56.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.290 --rc genhtml_branch_coverage=1 00:19:56.290 --rc genhtml_function_coverage=1 00:19:56.290 --rc genhtml_legend=1 00:19:56.290 --rc geninfo_all_blocks=1 00:19:56.290 --rc geninfo_unexecuted_blocks=1 00:19:56.290 00:19:56.290 ' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
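
Note: the cmp_versions trace above is deciding whether the installed lcov is older than 2.x by splitting both version strings on dots/dashes and comparing them field by field; because this run reports lcov 1.15, it then selects the pre-2.0 "--rc lcov_branch_coverage=1" option set. A minimal sketch of that comparison idiom, assuming plain numeric fields (a paraphrase, not the exact scripts/common.sh code):

# Return 0 (true) if dotted version $1 is older than $2; paraphrase of the
# field-by-field comparison traced above, assuming numeric version fields.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # versions are equal
}

version_lt 1.15 2 && echo 'lcov is older than 2.x, keep the legacy branch-coverage flags'
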
00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:56.290 09:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:56.290 ************************************ 00:19:56.290 START TEST nvmf_shutdown_tc1 00:19:56.290 ************************************ 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.290 09:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.858 09:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.858 09:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.858 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.858 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:02.858 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.858 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:02.858 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.859 09:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:02.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:02.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:02.859 Found net devices under 0000:af:00.0: cvl_0_0 00:20:02.859 09:31:14 
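
Note: the pci_net_devs globbing above is how the script maps each E810 PCI function to its kernel interface name: it lists /sys/bus/pci/devices/<bdf>/net/ and strips the directory prefix. A simplified sketch for one of the ports seen in this run (the real loop also checks driver binding and link state):

# Simplified sketch: resolve a PCI function to its net device via sysfs,
# as done above for 0000:af:00.0 and 0000:af:00.1.
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 in this run
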
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:02.859 Found net devices under 0000:af:00.1: cvl_0_1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:02.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:20:02.859 00:20:02.859 --- 10.0.0.2 ping statistics --- 00:20:02.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.859 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:20:02.859 00:20:02.859 --- 10.0.0.1 ping statistics --- 00:20:02.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.859 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:02.859 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3379059 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3379059 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3379059 ']' 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
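
Note: everything from "ip netns add" through the two pings above is nvmf_tcp_init building the usual single-host topology for NET_TYPE=phy: one port of the NIC is moved into a private network namespace and acts as the target, while the other port stays in the root namespace as the initiator. Condensed to its core commands (interface names and addresses as observed in this run; address flushes, error handling, and the iptables comment text are simplified):

# Condensed sketch of the namespace-based target/initiator setup traced above.
NS=cvl_0_0_ns_spdk            # namespace that will own the target port
TGT_IF=cvl_0_0                # target-side port (10.0.0.2)
INI_IF=cvl_0_1                # initiator-side port (10.0.0.1)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

# nvmfappstart then launches the target inside the namespace, e.g.:
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
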
00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 [2024-12-13 09:31:14.369529] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:02.860 [2024-12-13 09:31:14.369581] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.860 [2024-12-13 09:31:14.439219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.860 [2024-12-13 09:31:14.480778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.860 [2024-12-13 09:31:14.480812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.860 [2024-12-13 09:31:14.480819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.860 [2024-12-13 09:31:14.480825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.860 [2024-12-13 09:31:14.480831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.860 [2024-12-13 09:31:14.482307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.860 [2024-12-13 09:31:14.482393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.860 [2024-12-13 09:31:14.482517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.860 [2024-12-13 09:31:14.482518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 [2024-12-13 09:31:14.616157] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:02.860 09:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.860 09:31:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 Malloc1 
00:20:02.860 [2024-12-13 09:31:14.731467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.860 Malloc2 00:20:02.860 Malloc3 00:20:02.860 Malloc4 00:20:02.860 Malloc5 00:20:02.860 Malloc6 00:20:02.860 Malloc7 00:20:02.860 Malloc8 00:20:02.860 Malloc9 00:20:02.860 Malloc10 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3379130 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3379130 /var/tmp/bdevperf.sock 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3379130 ']' 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
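
Note: gen_nvmf_target_json, whose trace follows, builds the --json document handed to bdev_svc (and later bdevperf) via /dev/fd/63 by appending one here-doc fragment per subsystem to a bash array and then joining the fragments with commas; the full ten-controller result is printed further down. A stripped-down sketch of that accumulation pattern, reduced to two subsystems and omitting the wrapper the real helper adds around the fragments:

# Stripped-down sketch of the config-accumulation pattern traced below:
# one JSON fragment per subsystem, comma-joined at the end.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"    # comma-joined fragments, as seen in the printf later in this log
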
00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:02.860 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 [2024-12-13 09:31:15.204066] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:20:02.861 [2024-12-13 09:31:15.204119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:02.861 { 00:20:02.861 "params": { 00:20:02.861 "name": "Nvme$subsystem", 00:20:02.861 "trtype": "$TEST_TRANSPORT", 00:20:02.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.861 "adrfam": "ipv4", 00:20:02.861 "trsvcid": "$NVMF_PORT", 00:20:02.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.861 "hdgst": ${hdgst:-false}, 00:20:02.861 "ddgst": ${ddgst:-false} 00:20:02.861 }, 00:20:02.861 "method": "bdev_nvme_attach_controller" 00:20:02.861 } 00:20:02.861 EOF 00:20:02.861 )") 00:20:02.861 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:03.121 { 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme$subsystem", 00:20:03.121 "trtype": "$TEST_TRANSPORT", 00:20:03.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.121 "adrfam": "ipv4", 
00:20:03.121 "trsvcid": "$NVMF_PORT", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.121 "hdgst": ${hdgst:-false}, 00:20:03.121 "ddgst": ${ddgst:-false} 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 } 00:20:03.121 EOF 00:20:03.121 )") 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:03.121 09:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme1", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme2", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme3", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme4", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme5", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme6", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme7", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 
"adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme8", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme9", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 },{ 00:20:03.121 "params": { 00:20:03.121 "name": "Nvme10", 00:20:03.121 "trtype": "tcp", 00:20:03.121 "traddr": "10.0.0.2", 00:20:03.121 "adrfam": "ipv4", 00:20:03.121 "trsvcid": "4420", 00:20:03.121 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.121 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.121 "hdgst": false, 00:20:03.121 "ddgst": false 00:20:03.121 }, 00:20:03.121 "method": "bdev_nvme_attach_controller" 00:20:03.121 }' 00:20:03.121 [2024-12-13 09:31:15.271483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.121 [2024-12-13 09:31:15.312553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3379130 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:05.023 09:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:05.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3379130 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3379059 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.960 { 00:20:05.960 "params": { 00:20:05.960 "name": "Nvme$subsystem", 00:20:05.960 "trtype": "$TEST_TRANSPORT", 00:20:05.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "$NVMF_PORT", 00:20:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.960 "hdgst": ${hdgst:-false}, 00:20:05.960 "ddgst": ${ddgst:-false} 00:20:05.960 }, 00:20:05.960 "method": "bdev_nvme_attach_controller" 00:20:05.960 } 00:20:05.960 EOF 00:20:05.960 )") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.960 { 00:20:05.960 "params": { 00:20:05.960 "name": "Nvme$subsystem", 00:20:05.960 "trtype": "$TEST_TRANSPORT", 00:20:05.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "$NVMF_PORT", 00:20:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.960 "hdgst": ${hdgst:-false}, 00:20:05.960 "ddgst": ${ddgst:-false} 00:20:05.960 }, 00:20:05.960 "method": "bdev_nvme_attach_controller" 00:20:05.960 } 00:20:05.960 EOF 00:20:05.960 )") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.960 { 00:20:05.960 "params": { 00:20:05.960 "name": "Nvme$subsystem", 00:20:05.960 "trtype": "$TEST_TRANSPORT", 00:20:05.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "$NVMF_PORT", 00:20:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.960 "hdgst": ${hdgst:-false}, 00:20:05.960 "ddgst": ${ddgst:-false} 00:20:05.960 }, 00:20:05.960 "method": "bdev_nvme_attach_controller" 00:20:05.960 } 00:20:05.960 EOF 00:20:05.960 )") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.960 { 00:20:05.960 "params": { 00:20:05.960 "name": "Nvme$subsystem", 00:20:05.960 "trtype": "$TEST_TRANSPORT", 00:20:05.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "$NVMF_PORT", 00:20:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.960 "hdgst": ${hdgst:-false}, 00:20:05.960 "ddgst": ${ddgst:-false} 00:20:05.960 }, 00:20:05.960 "method": "bdev_nvme_attach_controller" 00:20:05.960 } 00:20:05.960 EOF 00:20:05.960 )") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.960 { 00:20:05.960 "params": { 00:20:05.960 "name": "Nvme$subsystem", 00:20:05.960 "trtype": "$TEST_TRANSPORT", 00:20:05.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "$NVMF_PORT", 00:20:05.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.960 "hdgst": ${hdgst:-false}, 00:20:05.960 "ddgst": ${ddgst:-false} 00:20:05.960 }, 00:20:05.960 "method": "bdev_nvme_attach_controller" 00:20:05.960 } 00:20:05.960 EOF 00:20:05.960 )") 00:20:05.960 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.961 { 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme$subsystem", 00:20:05.961 "trtype": "$TEST_TRANSPORT", 00:20:05.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "$NVMF_PORT", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.961 "hdgst": ${hdgst:-false}, 00:20:05.961 "ddgst": ${ddgst:-false} 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 } 00:20:05.961 EOF 00:20:05.961 )") 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.961 { 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme$subsystem", 00:20:05.961 "trtype": "$TEST_TRANSPORT", 00:20:05.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "$NVMF_PORT", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.961 "hdgst": ${hdgst:-false}, 00:20:05.961 "ddgst": ${ddgst:-false} 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 } 00:20:05.961 EOF 00:20:05.961 )") 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 [2024-12-13 
09:31:18.153098] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:05.961 [2024-12-13 09:31:18.153151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379712 ] 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.961 { 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme$subsystem", 00:20:05.961 "trtype": "$TEST_TRANSPORT", 00:20:05.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "$NVMF_PORT", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.961 "hdgst": ${hdgst:-false}, 00:20:05.961 "ddgst": ${ddgst:-false} 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 } 00:20:05.961 EOF 00:20:05.961 )") 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.961 { 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme$subsystem", 00:20:05.961 "trtype": "$TEST_TRANSPORT", 00:20:05.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "$NVMF_PORT", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.961 "hdgst": ${hdgst:-false}, 00:20:05.961 "ddgst": ${ddgst:-false} 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 } 00:20:05.961 EOF 00:20:05.961 )") 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.961 { 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme$subsystem", 00:20:05.961 "trtype": "$TEST_TRANSPORT", 00:20:05.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "$NVMF_PORT", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.961 "hdgst": ${hdgst:-false}, 00:20:05.961 "ddgst": ${ddgst:-false} 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 } 00:20:05.961 EOF 00:20:05.961 )") 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:05.961 09:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme1", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme2", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme3", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme4", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme5", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme6", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme7", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme8", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme9", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 },{ 00:20:05.961 "params": { 00:20:05.961 "name": "Nvme10", 00:20:05.961 "trtype": "tcp", 00:20:05.961 "traddr": "10.0.0.2", 00:20:05.961 "adrfam": "ipv4", 00:20:05.961 "trsvcid": "4420", 00:20:05.961 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:05.961 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:05.961 "hdgst": false, 00:20:05.961 "ddgst": false 00:20:05.961 }, 00:20:05.961 "method": "bdev_nvme_attach_controller" 00:20:05.961 }' 00:20:05.961 [2024-12-13 09:31:18.221130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.961 [2024-12-13 09:31:18.261760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.338 Running I/O for 1 seconds... 00:20:08.716 2255.00 IOPS, 140.94 MiB/s 00:20:08.716 Latency(us) 00:20:08.716 [2024-12-13T08:31:21.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme1n1 : 1.16 276.17 17.26 0.00 0.00 228105.07 15978.30 215707.06 00:20:08.716 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme2n1 : 1.15 283.37 17.71 0.00 0.00 219367.96 8488.47 206719.27 00:20:08.716 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme3n1 : 1.16 274.90 17.18 0.00 0.00 224564.27 15603.81 215707.06 00:20:08.716 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme4n1 : 1.17 274.09 17.13 0.00 0.00 221079.80 5492.54 231685.36 00:20:08.716 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme5n1 : 1.17 272.76 17.05 0.00 0.00 220200.96 18100.42 221698.93 00:20:08.716 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme6n1 : 1.18 272.30 17.02 0.00 0.00 217559.43 16727.28 222697.57 00:20:08.716 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme7n1 : 1.14 286.23 17.89 0.00 0.00 197950.69 15915.89 206719.27 00:20:08.716 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme8n1 : 1.15 277.38 17.34 0.00 0.00 207084.79 15291.73 221698.93 00:20:08.716 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme9n1 : 1.18 271.70 16.98 0.00 0.00 208368.98 17725.93 220700.28 00:20:08.716 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:08.716 Verification LBA range: start 0x0 length 0x400 00:20:08.716 Nvme10n1 : 1.18 271.05 16.94 0.00 0.00 206387.40 14917.24 236678.58 00:20:08.716 [2024-12-13T08:31:21.082Z] =================================================================================================================== 00:20:08.716 [2024-12-13T08:31:21.082Z] Total : 2759.96 172.50 0.00 0.00 215036.33 5492.54 236678.58 00:20:08.716 09:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:08.716 rmmod nvme_tcp 00:20:08.716 rmmod nvme_fabrics 00:20:08.716 rmmod nvme_keyring 00:20:08.716 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:08.975 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:08.975 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:08.975 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3379059 ']' 00:20:08.975 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3379059 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3379059 ']' 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3379059 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3379059 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.976 09:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3379059' 00:20:08.976 killing process with pid 3379059 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3379059 00:20:08.976 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3379059 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.235 09:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.771 00:20:11.771 real 0m15.046s 00:20:11.771 user 0m33.945s 00:20:11.771 sys 0m5.609s 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:11.771 ************************************ 00:20:11.771 END TEST nvmf_shutdown_tc1 00:20:11.771 ************************************ 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:11.771 ************************************ 00:20:11.771 START TEST nvmf_shutdown_tc2 00:20:11.771 ************************************ 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.771 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.771 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:11.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.772 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:11.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:11.772 Found net devices under 0000:af:00.0: cvl_0_0 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.772 09:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:11.772 Found net devices under 0000:af:00.1: cvl_0_1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:20:11.772 00:20:11.772 --- 10.0.0.2 ping statistics --- 00:20:11.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.772 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:11.772 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:20:11.772 00:20:11.772 --- 10.0.0.1 ping statistics --- 00:20:11.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.773 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3380810 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3380810 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3380810 ']' 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.773 09:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.773 [2024-12-13 09:31:24.027034] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:11.773 [2024-12-13 09:31:24.027082] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.773 [2024-12-13 09:31:24.091616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.773 [2024-12-13 09:31:24.132597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.773 [2024-12-13 09:31:24.132630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.773 [2024-12-13 09:31:24.132637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.773 [2024-12-13 09:31:24.132644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.773 [2024-12-13 09:31:24.132649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.773 [2024-12-13 09:31:24.134123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.773 [2024-12-13 09:31:24.134196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.773 [2024-12-13 09:31:24.134323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.773 [2024-12-13 09:31:24.134324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 [2024-12-13 09:31:24.279159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.032 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 Malloc1 00:20:12.033 [2024-12-13 09:31:24.397848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.291 Malloc2 00:20:12.291 Malloc3 00:20:12.291 Malloc4 00:20:12.291 Malloc5 00:20:12.291 Malloc6 00:20:12.291 Malloc7 00:20:12.551 Malloc8 00:20:12.551 Malloc9 00:20:12.551 Malloc10 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3380877 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3380877 /var/tmp/bdevperf.sock 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3380877 ']' 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.551 09:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.551 { 00:20:12.551 "params": { 00:20:12.551 "name": "Nvme$subsystem", 00:20:12.551 "trtype": "$TEST_TRANSPORT", 00:20:12.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.551 "adrfam": "ipv4", 00:20:12.551 "trsvcid": "$NVMF_PORT", 00:20:12.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.551 "hdgst": ${hdgst:-false}, 00:20:12.551 "ddgst": ${ddgst:-false} 00:20:12.551 }, 00:20:12.551 "method": "bdev_nvme_attach_controller" 00:20:12.551 } 00:20:12.551 EOF 00:20:12.551 )") 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.551 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.551 { 00:20:12.551 "params": { 00:20:12.551 "name": "Nvme$subsystem", 00:20:12.551 "trtype": "$TEST_TRANSPORT", 00:20:12.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.551 "adrfam": "ipv4", 00:20:12.551 "trsvcid": "$NVMF_PORT", 00:20:12.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.551 "hdgst": ${hdgst:-false}, 00:20:12.551 "ddgst": ${ddgst:-false} 00:20:12.551 }, 00:20:12.551 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 
"name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 [2024-12-13 09:31:24.876397] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:12.552 [2024-12-13 09:31:24.876464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380877 ] 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.552 { 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme$subsystem", 00:20:12.552 "trtype": "$TEST_TRANSPORT", 00:20:12.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.552 
"adrfam": "ipv4", 00:20:12.552 "trsvcid": "$NVMF_PORT", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.552 "hdgst": ${hdgst:-false}, 00:20:12.552 "ddgst": ${ddgst:-false} 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 } 00:20:12.552 EOF 00:20:12.552 )") 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:12.552 09:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme1", 00:20:12.552 "trtype": "tcp", 00:20:12.552 "traddr": "10.0.0.2", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "4420", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.552 "hdgst": false, 00:20:12.552 "ddgst": false 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 },{ 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme2", 00:20:12.552 "trtype": "tcp", 00:20:12.552 "traddr": "10.0.0.2", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "4420", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:12.552 "hdgst": false, 00:20:12.552 "ddgst": false 00:20:12.552 }, 00:20:12.552 "method": "bdev_nvme_attach_controller" 00:20:12.552 },{ 00:20:12.552 "params": { 00:20:12.552 "name": "Nvme3", 00:20:12.552 "trtype": "tcp", 00:20:12.552 "traddr": "10.0.0.2", 00:20:12.552 "adrfam": "ipv4", 00:20:12.552 "trsvcid": "4420", 00:20:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:12.552 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:12.552 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme4", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme5", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme6", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme7", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 
00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme8", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme9", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 },{ 00:20:12.553 "params": { 00:20:12.553 "name": "Nvme10", 00:20:12.553 "trtype": "tcp", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "adrfam": "ipv4", 00:20:12.553 "trsvcid": "4420", 00:20:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:12.553 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:12.553 "hdgst": false, 00:20:12.553 "ddgst": false 00:20:12.553 }, 00:20:12.553 "method": "bdev_nvme_attach_controller" 00:20:12.553 }' 00:20:12.812 [2024-12-13 09:31:24.945950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.812 [2024-12-13 09:31:24.988748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.185 Running I/O for 10 seconds... 
00:20:14.445 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.445 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:14.445 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:14.445 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.445 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3380877 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3380877 ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3380877 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # uname 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3380877 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3380877' 00:20:14.705 killing process with pid 3380877 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3380877 00:20:14.705 09:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3380877 00:20:14.705 Received shutdown signal, test time was about 0.660053 seconds 00:20:14.705 00:20:14.705 Latency(us) 00:20:14.705 [2024-12-13T08:31:27.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.705 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme1n1 : 0.64 298.60 18.66 0.00 0.00 210508.72 19473.55 206719.27 00:20:14.705 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme2n1 : 0.65 294.09 18.38 0.00 0.00 208133.53 15478.98 212711.13 00:20:14.705 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme3n1 : 0.64 300.24 18.76 0.00 0.00 199325.01 14605.17 213709.78 00:20:14.705 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme4n1 : 0.64 301.21 18.83 0.00 0.00 193404.99 23343.30 205720.62 00:20:14.705 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme5n1 : 0.66 291.48 18.22 0.00 0.00 195269.97 19723.22 198730.12 00:20:14.705 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme6n1 : 0.66 291.17 18.20 0.00 0.00 190496.51 17476.27 220700.28 00:20:14.705 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme7n1 : 0.65 296.79 18.55 0.00 0.00 181377.71 28960.67 180754.53 00:20:14.705 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme8n1 : 0.65 295.41 18.46 0.00 0.00 177212.79 18100.42 210713.84 00:20:14.705 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme9n1 : 0.63 203.54 12.72 0.00 0.00 247860.66 34453.21 219701.64 00:20:14.705 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.705 Verification LBA range: start 0x0 length 0x400 00:20:14.705 Nvme10n1 : 0.63 204.71 12.79 0.00 
0.00 238111.21 16352.79 231685.36 00:20:14.705 [2024-12-13T08:31:27.071Z] =================================================================================================================== 00:20:14.705 [2024-12-13T08:31:27.071Z] Total : 2777.24 173.58 0.00 0.00 201397.55 14605.17 231685.36 00:20:14.963 09:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3380810 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.902 rmmod nvme_tcp 00:20:15.902 rmmod nvme_fabrics 00:20:15.902 rmmod nvme_keyring 00:20:15.902 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3380810 ']' 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3380810 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3380810 ']' 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3380810 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3380810 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3380810' 00:20:16.231 killing process with pid 3380810 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3380810 00:20:16.231 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3380810 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.644 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:16.645 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.645 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.645 09:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.550 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:18.550 00:20:18.550 real 0m7.086s 00:20:18.550 user 0m20.383s 00:20:18.550 sys 0m1.241s 00:20:18.550 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.550 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.550 ************************************ 00:20:18.550 END TEST nvmf_shutdown_tc2 00:20:18.551 ************************************ 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:18.551 ************************************ 00:20:18.551 START TEST nvmf_shutdown_tc3 00:20:18.551 ************************************ 00:20:18.551 09:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@321 -- # local -ga x722 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:18.551 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:18.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:18.551 Found net devices under 0000:af:00.0: cvl_0_0 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:18.551 Found net devices under 0000:af:00.1: cvl_0_1 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.551 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.552 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.809 09:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.809 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.809 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.809 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.809 09:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.809 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.809 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:18.809 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:18.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:20:18.810 00:20:18.810 --- 10.0.0.2 ping statistics --- 00:20:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.810 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:18.810 00:20:18.810 --- 10.0.0.1 ping statistics --- 00:20:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.810 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3382101 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3382101 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3382101 ']' 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
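
The target bring-up traced above isolates the two e810 ports before any NVMe-oF traffic flows: the first port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the host namespace as the initiator side at 10.0.0.1, an iptables ACCEPT rule is opened for TCP port 4420, and a ping in each direction confirms the path before the target application starts. Condensed from the commands in the trace (error handling and the preliminary address flushes omitted):

ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # host namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host namespace

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, traced just below), so every connection from bdevperf genuinely crosses the namespace boundary over TCP.
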
00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.810 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:18.810 [2024-12-13 09:31:31.108409] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:18.810 [2024-12-13 09:31:31.108459] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.810 [2024-12-13 09:31:31.176017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.069 [2024-12-13 09:31:31.216689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.069 [2024-12-13 09:31:31.216725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.069 [2024-12-13 09:31:31.216732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.069 [2024-12-13 09:31:31.216738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.069 [2024-12-13 09:31:31.216742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.069 [2024-12-13 09:31:31.218078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.069 [2024-12-13 09:31:31.218167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.069 [2024-12-13 09:31:31.218275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.069 [2024-12-13 09:31:31.218276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.069 [2024-12-13 09:31:31.355641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.069 09:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:19.069 
09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.069 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.328 Malloc1 00:20:19.328 [2024-12-13 09:31:31.471554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.328 Malloc2 00:20:19.328 Malloc3 00:20:19.328 Malloc4 00:20:19.328 Malloc5 00:20:19.328 Malloc6 00:20:19.587 Malloc7 00:20:19.587 Malloc8 00:20:19.587 Malloc9 00:20:19.587 Malloc10 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3382218 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3382218 /var/tmp/bdevperf.sock 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3382218 ']' 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
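
By this point the tc3 target is fully populated: the for i in "${num_subsystems[@]}" / cat loop in target/shutdown.sh (shutdown.sh@28-29 above) appends one RPC batch per subsystem to rpcs.txt, the batch is replayed against the running target through rpc_cmd (shutdown.sh@36), and the result is the Malloc1 through Malloc10 bdevs listed above plus ten cnode subsystems listening on 10.0.0.2 port 4420. The batch itself is not echoed verbatim in the trace; a rough sketch of what each iteration contributes, using standard SPDK RPC names, with $testdir, $MALLOC_BDEV_SIZE and $MALLOC_BLOCK_SIZE as placeholders for values not shown here ($testdir resolves to test/nvmf/target in this run):

for i in "${num_subsystems[@]}"; do
    cat <<EOL >> "$testdir/rpcs.txt"
bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOL
done

The second bdevperf instance being started here (pid 3382218, RPC socket /var/tmp/bdevperf.sock) then attaches one NVMe-oF controller per subsystem, exactly as in the tc2 run above.
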
00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.587 { 00:20:19.587 "params": { 00:20:19.587 "name": "Nvme$subsystem", 00:20:19.587 "trtype": "$TEST_TRANSPORT", 00:20:19.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.587 "adrfam": "ipv4", 00:20:19.587 "trsvcid": "$NVMF_PORT", 00:20:19.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.587 "hdgst": ${hdgst:-false}, 00:20:19.587 "ddgst": ${ddgst:-false} 00:20:19.587 }, 00:20:19.587 "method": "bdev_nvme_attach_controller" 00:20:19.587 } 00:20:19.587 EOF 00:20:19.587 )") 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.587 { 00:20:19.587 "params": { 00:20:19.587 "name": "Nvme$subsystem", 00:20:19.587 "trtype": "$TEST_TRANSPORT", 00:20:19.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.587 "adrfam": "ipv4", 00:20:19.587 "trsvcid": "$NVMF_PORT", 00:20:19.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.587 "hdgst": ${hdgst:-false}, 00:20:19.587 "ddgst": ${ddgst:-false} 00:20:19.587 }, 00:20:19.587 "method": "bdev_nvme_attach_controller" 00:20:19.587 } 00:20:19.587 EOF 00:20:19.587 )") 00:20:19.587 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.588 { 00:20:19.588 "params": { 00:20:19.588 "name": "Nvme$subsystem", 00:20:19.588 "trtype": "$TEST_TRANSPORT", 00:20:19.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.588 "adrfam": "ipv4", 00:20:19.588 "trsvcid": "$NVMF_PORT", 00:20:19.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.588 "hdgst": ${hdgst:-false}, 00:20:19.588 "ddgst": ${ddgst:-false} 00:20:19.588 }, 00:20:19.588 "method": "bdev_nvme_attach_controller" 00:20:19.588 } 00:20:19.588 EOF 00:20:19.588 )") 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:19.588 { 00:20:19.588 "params": { 00:20:19.588 "name": "Nvme$subsystem", 00:20:19.588 "trtype": "$TEST_TRANSPORT", 00:20:19.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.588 "adrfam": "ipv4", 00:20:19.588 "trsvcid": "$NVMF_PORT", 00:20:19.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.588 "hdgst": ${hdgst:-false}, 00:20:19.588 "ddgst": ${ddgst:-false} 00:20:19.588 }, 00:20:19.588 "method": "bdev_nvme_attach_controller" 00:20:19.588 } 00:20:19.588 EOF 00:20:19.588 )") 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.588 { 00:20:19.588 "params": { 00:20:19.588 "name": "Nvme$subsystem", 00:20:19.588 "trtype": "$TEST_TRANSPORT", 00:20:19.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.588 "adrfam": "ipv4", 00:20:19.588 "trsvcid": "$NVMF_PORT", 00:20:19.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.588 "hdgst": ${hdgst:-false}, 00:20:19.588 "ddgst": ${ddgst:-false} 00:20:19.588 }, 00:20:19.588 "method": "bdev_nvme_attach_controller" 00:20:19.588 } 00:20:19.588 EOF 00:20:19.588 )") 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.588 { 00:20:19.588 "params": { 00:20:19.588 "name": "Nvme$subsystem", 00:20:19.588 "trtype": "$TEST_TRANSPORT", 00:20:19.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.588 "adrfam": "ipv4", 00:20:19.588 "trsvcid": "$NVMF_PORT", 00:20:19.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.588 "hdgst": ${hdgst:-false}, 00:20:19.588 "ddgst": ${ddgst:-false} 00:20:19.588 }, 00:20:19.588 "method": "bdev_nvme_attach_controller" 00:20:19.588 } 00:20:19.588 EOF 00:20:19.588 )") 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.588 { 00:20:19.588 "params": { 00:20:19.588 "name": "Nvme$subsystem", 00:20:19.588 "trtype": "$TEST_TRANSPORT", 00:20:19.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.588 "adrfam": "ipv4", 00:20:19.588 "trsvcid": "$NVMF_PORT", 00:20:19.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.588 "hdgst": ${hdgst:-false}, 00:20:19.588 "ddgst": ${ddgst:-false} 00:20:19.588 }, 00:20:19.588 "method": "bdev_nvme_attach_controller" 00:20:19.588 } 00:20:19.588 EOF 00:20:19.588 )") 00:20:19.588 [2024-12-13 09:31:31.950611] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:20:19.588 [2024-12-13 09:31:31.950662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382218 ] 00:20:19.588 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.847 { 00:20:19.847 "params": { 00:20:19.847 "name": "Nvme$subsystem", 00:20:19.847 "trtype": "$TEST_TRANSPORT", 00:20:19.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.847 "adrfam": "ipv4", 00:20:19.847 "trsvcid": "$NVMF_PORT", 00:20:19.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.847 "hdgst": ${hdgst:-false}, 00:20:19.847 "ddgst": ${ddgst:-false} 00:20:19.847 }, 00:20:19.847 "method": "bdev_nvme_attach_controller" 00:20:19.847 } 00:20:19.847 EOF 00:20:19.847 )") 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.847 { 00:20:19.847 "params": { 00:20:19.847 "name": "Nvme$subsystem", 00:20:19.847 "trtype": "$TEST_TRANSPORT", 00:20:19.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.847 "adrfam": "ipv4", 00:20:19.847 "trsvcid": "$NVMF_PORT", 00:20:19.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.847 "hdgst": ${hdgst:-false}, 00:20:19.847 "ddgst": ${ddgst:-false} 00:20:19.847 }, 00:20:19.847 "method": "bdev_nvme_attach_controller" 00:20:19.847 } 00:20:19.847 EOF 00:20:19.847 )") 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:19.847 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:19.847 { 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme$subsystem", 00:20:19.848 "trtype": "$TEST_TRANSPORT", 00:20:19.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "$NVMF_PORT", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.848 "hdgst": ${hdgst:-false}, 00:20:19.848 "ddgst": ${ddgst:-false} 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 } 00:20:19.848 EOF 00:20:19.848 )") 00:20:19.848 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:19.848 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
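The trace above (the nvmf/common.sh "for subsystem" loop) and the IFS=, / printf / jq lines that follow show how the bdevperf JSON is put together: each iteration renders one bdev_nvme_attach_controller params block from a heredoc and appends it to the config array, and the fragments are then comma-joined and run through jq. A minimal standalone sketch of that pattern, with hypothetical values standing in for the NVMF_* variables and the fragments simply wrapped in an array so jq has valid input (the real helper splices them into a fuller target config), assuming bash and jq are available:

#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly traced above (assumed values, not the library code).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  # One bdev_nvme_attach_controller fragment per subsystem, as in the trace.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the fragments and wrap them in an array so jq has valid input;
# the traced script instead splices the joined fragments into a larger bdevperf config.
IFS=','
printf '[%s]\n' "${config[*]}" | jq .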
00:20:19.848 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:19.848 09:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme1", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme2", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme3", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme4", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme5", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme6", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme7", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme8", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme9", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 },{ 00:20:19.848 "params": { 00:20:19.848 "name": "Nvme10", 00:20:19.848 "trtype": "tcp", 00:20:19.848 "traddr": "10.0.0.2", 00:20:19.848 "adrfam": "ipv4", 00:20:19.848 "trsvcid": "4420", 00:20:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:19.848 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:19.848 "hdgst": false, 00:20:19.848 "ddgst": false 00:20:19.848 }, 00:20:19.848 "method": "bdev_nvme_attach_controller" 00:20:19.848 }' 00:20:19.848 [2024-12-13 09:31:32.018630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.848 [2024-12-13 09:31:32.059387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.751 Running I/O for 10 seconds... 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.751 09:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:21.751 09:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:22.010 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=193 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 193 -ge 100 ']' 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:22.272 09:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3382101 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3382101 ']' 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3382101 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382101 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382101' 00:20:22.272 killing process with pid 3382101 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3382101 00:20:22.272 09:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3382101 00:20:22.272 [2024-12-13 09:31:34.630122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630243] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.272 [2024-12-13 09:31:34.630320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the 
state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.630572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11840 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 
09:31:34.632636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same 
with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.273 [2024-12-13 09:31:34.632839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632898] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.632937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11d10 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the 
state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.634941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e121e0 is same with the state(6) to be set 00:20:22.274 [2024-12-13 09:31:34.635306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635361] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.274 [2024-12-13 09:31:34.635607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.274 [2024-12-13 09:31:34.635614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.635991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.635999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.275 [2024-12-13 09:31:34.636137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.275 [2024-12-13 09:31:34.636144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.275 [2024-12-13 09:31:34.636151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:1[2024-12-13 09:31:34.636309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.636319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.276 [2024-12-13 09:31:34.636349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.276 [2024-12-13 09:31:34.636356] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ[2024-12-13 09:31:34.636383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with transport error -6 (No such device or address) on qpair id 1 00:20:22.276 the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.276 [2024-12-13 09:31:34.636404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-13 09:31:34.636531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e126d0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d950 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb4d0 is same with the state(6) to be set 00:20:22.549 [2024-12-13 09:31:34.636671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.549 [2024-12-13 09:31:34.636694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.549 [2024-12-13 09:31:34.636701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7000 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.636770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7490 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.636856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.550 [2024-12-13 09:31:34.636909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.636915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec950 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:12the state(6) to be set 00:20:22.550 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:12[2024-12-13 09:31:34.637208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:12the state(6) to be set 00:20:22.550 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 
09:31:34.637248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:1the state(6) to be set 00:20:22.550 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:1[2024-12-13 09:31:34.637304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:22.550 [2024-12-13 09:31:34.637335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.550 [2024-12-13 09:31:34.637342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.550 [2024-12-13 09:31:34.637349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.550 [2024-12-13 09:31:34.637356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:1the state(6) to be set 00:20:22.550 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:20:22.551 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:1[2024-12-13 09:31:34.637415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:1[2024-12-13 09:31:34.637458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with [2024-12-13 09:31:34.637478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:1the state(6) to be set 00:20:22.551 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 
is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:1[2024-12-13 09:31:34.637567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:1[2024-12-13 09:31:34.637605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with 28 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 09:31:34.637613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e12ba0 is same with the state(6) to be set 00:20:22.551 [2024-12-13 09:31:34.637648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 
09:31:34.637732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.551 [2024-12-13 09:31:34.637776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.551 [2024-12-13 09:31:34.637783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 
09:31:34.637880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.637939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.552 [2024-12-13 09:31:34.637945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.552 [2024-12-13 09:31:34.638435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 
00:20:22.552 [2024-12-13 09:31:34.638518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638798] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.552 [2024-12-13 09:31:34.638848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13070 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the 
state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.639997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 
09:31:34.640166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13540 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.640993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.553 [2024-12-13 09:31:34.641067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same 
with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.641383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.650791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.650970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.554 [2024-12-13 09:31:34.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:22.554 [2024-12-13 09:31:34.652685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164d950 (9): Bad file descriptor 00:20:22.554 [2024-12-13 09:31:34.652727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb4d0 (9): Bad file descriptor 00:20:22.554 [2024-12-13 09:31:34.652747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7000 (9): Bad file descriptor 00:20:22.554 [2024-12-13 09:31:34.652781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1618290 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.652894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.652961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.652969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664020 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.653001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c610 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.653097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7490 (9): Bad file descriptor 00:20:22.554 [2024-12-13 09:31:34.653127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.554 [2024-12-13 09:31:34.653194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.554 [2024-12-13 09:31:34.653202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb2d0 is same with the state(6) to be set 00:20:22.554 [2024-12-13 09:31:34.653220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec950 (9): Bad file descriptor 00:20:22.554 [2024-12-13 09:31:34.655196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:22.555 [2024-12-13 09:31:34.656101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same 
with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.555 [2024-12-13 09:31:34.656191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164d950 with addr=10.0.0.2, port=4420 00:20:22.555 [2024-12-13 09:31:34.656212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d950 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.555 [2024-12-13 09:31:34.656390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13a30 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.656399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7000 with addr=10.0.0.2, port=4420 00:20:22.555 [2024-12-13 09:31:34.656412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7000 is same with the state(6) to be set 00:20:22.555 [2024-12-13 09:31:34.657102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164d950 (9): Bad file descriptor 00:20:22.555 [2024-12-13 09:31:34.657130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7000 (9): Bad file descriptor 00:20:22.555 [2024-12-13 09:31:34.657195] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657257] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657318] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657367] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657419] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657479] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.555 [2024-12-13 09:31:34.657560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:22.555 [2024-12-13 09:31:34.657571]
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:22.555 [2024-12-13 09:31:34.657583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:22.555 [2024-12-13 09:31:34.657593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:22.555 [2024-12-13 09:31:34.657603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:22.555 [2024-12-13 09:31:34.657611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:22.555 [2024-12-13 09:31:34.657621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:22.555 [2024-12-13 09:31:34.657629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:22.555 [2024-12-13 09:31:34.658158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 
09:31:34.658319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.555 [2024-12-13 09:31:34.658445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.555 [2024-12-13 09:31:34.658461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658534] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.658986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.658995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.556 [2024-12-13 09:31:34.659288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:22.556 [2024-12-13 09:31:34.659369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.556 [2024-12-13 09:31:34.659381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.659988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.659998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.557 [2024-12-13 09:31:34.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.557 [2024-12-13 09:31:34.660236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.660705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.660715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143ae30 is same with the state(6) to be set 00:20:22.558 [2024-12-13 09:31:34.662886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:22.558 [2024-12-13 09:31:34.662908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:22.558 [2024-12-13 09:31:34.662946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664f10 (9): Bad file descriptor 00:20:22.558 [2024-12-13 09:31:34.662958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664020 (9): Bad file descriptor 00:20:22.558 [2024-12-13 09:31:34.662984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1618290 (9): Bad file descriptor 00:20:22.558 [2024-12-13 09:31:34.663003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110c610 (9): Bad file descriptor 00:20:22.558 [2024-12-13 09:31:34.663023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb2d0 (9): Bad file descriptor 00:20:22.558 [2024-12-13 09:31:34.663144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.558 [2024-12-13 09:31:34.663300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.558 [2024-12-13 09:31:34.663306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.559 [2024-12-13 09:31:34.663939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.559 [2024-12-13 09:31:34.663947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.663954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.663963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.663969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.663978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.663984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.663992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.663999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.664153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.664163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fb3d0 is same with the state(6) to be set 00:20:22.560 [2024-12-13 09:31:34.665159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.560 [2024-12-13 09:31:34.665574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.560 [2024-12-13 09:31:34.665583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.665979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.665987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:22.561 [2024-12-13 09:31:34.665995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 
09:31:34.666156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.666178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.666187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fc5c0 is same with the state(6) to be set 00:20:22.561 [2024-12-13 09:31:34.667178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.667191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.667201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.667209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.561 [2024-12-13 09:31:34.667218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.561 [2024-12-13 09:31:34.667225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.562 [2024-12-13 09:31:34.667818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.562 [2024-12-13 09:31:34.667827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.667987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.667995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.563 [2024-12-13 09:31:34.668218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.563 [2024-12-13 09:31:34.668225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fbac0 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.669634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:22.563 [2024-12-13 09:31:34.669655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:22.563 [2024-12-13 09:31:34.669666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:22.563 [2024-12-13 09:31:34.669813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.669828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1664020 with addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.669838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664020 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.669928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.669939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1664f10 with 
addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.669946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664f10 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.670228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.670243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7490 with addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.670252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7490 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.670394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.670404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eb4d0 with addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.670411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb4d0 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.670528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.670539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec950 with addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.670546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec950 is same with the state(6) to be set 00:20:22.563 [2024-12-13 09:31:34.670556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664020 (9): Bad file descriptor 00:20:22.563 [2024-12-13 09:31:34.670566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664f10 (9): Bad file descriptor 00:20:22.563 [2024-12-13 09:31:34.671256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:22.563 [2024-12-13 09:31:34.671273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:22.563 [2024-12-13 09:31:34.671292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7490 (9): Bad file descriptor 00:20:22.563 [2024-12-13 09:31:34.671308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb4d0 (9): Bad file descriptor 00:20:22.563 [2024-12-13 09:31:34.671316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec950 (9): Bad file descriptor 00:20:22.563 [2024-12-13 09:31:34.671325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:22.563 [2024-12-13 09:31:34.671332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:22.563 [2024-12-13 09:31:34.671340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:22.563 [2024-12-13 09:31:34.671348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:22.563 [2024-12-13 09:31:34.671356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:22.563 [2024-12-13 09:31:34.671364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:22.563 [2024-12-13 09:31:34.671371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:22.563 [2024-12-13 09:31:34.671378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:22.563 [2024-12-13 09:31:34.671556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.563 [2024-12-13 09:31:34.671572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7000 with addr=10.0.0.2, port=4420 00:20:22.563 [2024-12-13 09:31:34.671580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7000 is same with the state(6) to be set 00:20:22.564 [2024-12-13 09:31:34.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.564 [2024-12-13 09:31:34.671744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164d950 with addr=10.0.0.2, port=4420 00:20:22.564 [2024-12-13 09:31:34.671752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d950 is same with the state(6) to be set 00:20:22.564 [2024-12-13 09:31:34.671759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:22.564 [2024-12-13 09:31:34.671765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:22.564 [2024-12-13 09:31:34.671772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:22.564 [2024-12-13 09:31:34.671779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:22.564 [2024-12-13 09:31:34.671786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:22.564 [2024-12-13 09:31:34.671795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:22.564 [2024-12-13 09:31:34.671802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:22.564 [2024-12-13 09:31:34.671808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:22.564 [2024-12-13 09:31:34.671815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:22.564 [2024-12-13 09:31:34.671823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:22.564 [2024-12-13 09:31:34.671829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:22.564 [2024-12-13 09:31:34.671835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:22.564 [2024-12-13 09:31:34.671877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7000 (9): Bad file descriptor 00:20:22.564 [2024-12-13 09:31:34.671887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164d950 (9): Bad file descriptor 00:20:22.564 [2024-12-13 09:31:34.671909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:22.564 [2024-12-13 09:31:34.671917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:22.564 [2024-12-13 09:31:34.671923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:22.564 [2024-12-13 09:31:34.671930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:22.564 [2024-12-13 09:31:34.671937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:22.564 [2024-12-13 09:31:34.671944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:22.564 [2024-12-13 09:31:34.671950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:22.564 [2024-12-13 09:31:34.671956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:22.564 [2024-12-13 09:31:34.673000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.564 [2024-12-13 09:31:34.673489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.564 [2024-12-13 09:31:34.673499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.673991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.673998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.674014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.674023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.674030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.674039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.674047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.674056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.674063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.674071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fce40 is same with the state(6) to be set 00:20:22.565 [2024-12-13 09:31:34.675067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.675085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.675096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.675104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.675114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.675121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.675143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.675153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.565 [2024-12-13 09:31:34.675161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.565 [2024-12-13 09:31:34.675170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.566 [2024-12-13 09:31:34.675760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.566 [2024-12-13 09:31:34.675770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
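The "(00/08)" pair that spdk_nvme_print_completion attaches to every aborted command above is the NVMe status code type and status code. Per the NVMe base specification, SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, which is the expected way for in-flight READ/WRITE commands to complete while the submission queue is torn down during the shutdown test. A minimal decode helper, given as an illustrative sketch only (decode_nvme_status is not part of the SPDK test scripts):

decode_nvme_status() {
  # $1 = status code type (hex), $2 = status code (hex), as printed in "(SCT/SC)" form above
  local sct=$((16#$1)) sc=$((16#$2))
  case "$sct" in
    0) echo -n "SCT 0x0 (Generic Command Status), " ;;
    1) echo -n "SCT 0x1 (Command Specific Status), " ;;
    2) echo -n "SCT 0x2 (Media and Data Integrity Errors), " ;;
    *) echo -n "SCT 0x$1, " ;;
  esac
  if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
    echo "SC 0x08 (Command Aborted due to SQ Deletion)"
  else
    printf 'SC 0x%02x\n' "$sc"
  fi
}
decode_nvme_status 00 08   # prints: SCT 0x0 (Generic Command Status), SC 0x08 (Command Aborted due to SQ Deletion)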
00:20:22.566 [2024-12-13 09:31:34.675777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 
09:31:34.675936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.675987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.675994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676096] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.676128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.676135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fe1c0 is same with the state(6) to be set 00:20:22.567 [2024-12-13 09:31:34.677118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.567 [2024-12-13 09:31:34.677422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.567 [2024-12-13 09:31:34.677431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.677990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.677999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.568 [2024-12-13 09:31:34.678094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.568 [2024-12-13 09:31:34.678100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.569 [2024-12-13 09:31:34.678116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.569 [2024-12-13 09:31:34.678132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.569 [2024-12-13 09:31:34.678147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.569 [2024-12-13 09:31:34.678166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.569 [2024-12-13 09:31:34.678181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.569 [2024-12-13 09:31:34.678190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f8d70 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.679142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.679160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:22.569 task offset: 24576 on job bdev=Nvme10n1 fails 00:20:22.569 00:20:22.569 Latency(us) 00:20:22.569 [2024-12-13T08:31:34.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.569 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme1n1 ended in about 0.95 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme1n1 : 0.95 202.53 12.66 67.51 0.00 234611.57 32455.92 196732.83 00:20:22.569 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme2n1 ended in about 0.95 seconds with error 00:20:22.569 Verification 
LBA range: start 0x0 length 0x400 00:20:22.569 Nvme2n1 : 0.95 202.10 12.63 67.37 0.00 231181.41 16976.94 211712.49 00:20:22.569 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme3n1 ended in about 0.94 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme3n1 : 0.94 273.14 17.07 68.28 0.00 179225.55 14105.84 214708.42 00:20:22.569 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme4n1 ended in about 0.95 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme4n1 : 0.95 201.66 12.60 67.22 0.00 223957.33 16602.45 212711.13 00:20:22.569 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme5n1 ended in about 0.96 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme5n1 : 0.96 200.44 12.53 66.81 0.00 221591.41 16103.13 214708.42 00:20:22.569 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme6n1 ended in about 0.96 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme6n1 : 0.96 204.18 12.76 66.67 0.00 214858.22 16727.28 230686.72 00:20:22.569 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme7n1 ended in about 0.96 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme7n1 : 0.96 199.58 12.47 66.53 0.00 214857.39 16227.96 213709.78 00:20:22.569 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme8n1 ended in about 0.94 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme8n1 : 0.94 274.17 17.14 56.10 0.00 169422.76 2933.52 208716.56 00:20:22.569 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme9n1 ended in about 0.95 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme9n1 : 0.95 202.99 12.69 67.66 0.00 203099.86 6428.77 213709.78 00:20:22.569 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:22.569 Job: Nvme10n1 ended in about 0.94 seconds with error 00:20:22.569 Verification LBA range: start 0x0 length 0x400 00:20:22.569 Nvme10n1 : 0.94 205.23 12.83 68.41 0.00 196755.26 18100.42 229688.08 00:20:22.569 [2024-12-13T08:31:34.935Z] =================================================================================================================== 00:20:22.569 [2024-12-13T08:31:34.935Z] Total : 2166.01 135.38 662.57 0.00 207431.11 2933.52 230686.72 00:20:22.569 [2024-12-13 09:31:34.709293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:22.569 [2024-12-13 09:31:34.709338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.709788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.709818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eb2d0 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.709830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb2d0 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.710006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 
[2024-12-13 09:31:34.710019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1618290 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.710027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1618290 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.710202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.710214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110c610 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.710222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110c610 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.710273] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:20:22.569 [2024-12-13 09:31:34.710285] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:22.569 [2024-12-13 09:31:34.711211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb2d0 (9): Bad file descriptor 00:20:22.569 [2024-12-13 09:31:34.711302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1618290 (9): Bad file descriptor 00:20:22.569 [2024-12-13 09:31:34.711311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110c610 (9): Bad file descriptor 00:20:22.569 [2024-12-13 09:31:34.711359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:22.569 [2024-12-13 09:31:34.711509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.711523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1664f10 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.711533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664f10 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.711614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.711625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1664020 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.711633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664020 is same with the state(6) to be set 
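In the bdevperf summary above, the MiB/s column for each job is the successful IOPS scaled by the 65536-byte I/O size. A quick arithmetic check of the Nvme1n1 row (202.53 IOPS, 12.66 MiB/s), shown purely as an illustration of how the columns relate:

awk 'BEGIN { printf "%.2f MiB/s\n", 202.53 * 65536 / (1024 * 1024) }'   # prints 12.66 MiB/s, matching the Nvme1n1 row
awk 'BEGIN { printf "%.2f MiB/s\n", 2166.01 * 65536 / (1024 * 1024) }'  # prints 135.38 MiB/s, matching the Total row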
00:20:22.569 [2024-12-13 09:31:34.711641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:22.569 [2024-12-13 09:31:34.711647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:22.569 [2024-12-13 09:31:34.711655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:22.569 [2024-12-13 09:31:34.711668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:22.569 [2024-12-13 09:31:34.711677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:22.569 [2024-12-13 09:31:34.711684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:22.569 [2024-12-13 09:31:34.711691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:22.569 [2024-12-13 09:31:34.711697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:22.569 [2024-12-13 09:31:34.711704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:22.569 [2024-12-13 09:31:34.711710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:22.569 [2024-12-13 09:31:34.711717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:22.569 [2024-12-13 09:31:34.711723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
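The connect() failures that posix_sock_create() reports in this block all carry errno = 111, which on Linux is ECONNREFUSED: the host keeps trying to re-establish TCP qpairs to 10.0.0.2:4420 while the target side of the shutdown test has apparently stopped listening, so every reconnect is refused and the subsequent controller reinitializations fail. The errno mapping can be confirmed with a one-liner (illustrative only, not part of the test flow):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused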
00:20:22.569 [2024-12-13 09:31:34.711952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.711964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ec950 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.711971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ec950 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.712122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.712134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eb4d0 with addr=10.0.0.2, port=4420 00:20:22.569 [2024-12-13 09:31:34.712141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb4d0 is same with the state(6) to be set 00:20:22.569 [2024-12-13 09:31:34.712286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.569 [2024-12-13 09:31:34.712297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7490 with addr=10.0.0.2, port=4420 00:20:22.570 [2024-12-13 09:31:34.712305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7490 is same with the state(6) to be set 00:20:22.570 [2024-12-13 09:31:34.712402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.570 [2024-12-13 09:31:34.712414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x164d950 with addr=10.0.0.2, port=4420 00:20:22.570 [2024-12-13 09:31:34.712422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164d950 is same with the state(6) to be set 00:20:22.570 [2024-12-13 09:31:34.712608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.570 [2024-12-13 09:31:34.712620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7000 with addr=10.0.0.2, port=4420 00:20:22.570 [2024-12-13 09:31:34.712628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7000 is same with the state(6) to be set 00:20:22.570 [2024-12-13 09:31:34.712638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664f10 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664020 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ec950 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb4d0 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7490 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164d950 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7000 (9): Bad file descriptor 00:20:22.570 [2024-12-13 09:31:34.712728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 
09:31:34.712734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:22.570 [2024-12-13 09:31:34.712756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:22.570 [2024-12-13 09:31:34.712802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:22.570 [2024-12-13 09:31:34.712830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:22.570 [2024-12-13 09:31:34.712856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:22.570 [2024-12-13 09:31:34.712882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:22.570 [2024-12-13 09:31:34.712909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:22.570 [2024-12-13 09:31:34.712915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:22.570 [2024-12-13 09:31:34.712923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:22.570 [2024-12-13 09:31:34.712931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:22.829 09:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3382218 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3382218 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3382218 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.766 09:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.766 rmmod nvme_tcp 00:20:23.766 rmmod nvme_fabrics 00:20:23.766 rmmod nvme_keyring 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3382101 ']' 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3382101 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3382101 ']' 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3382101 00:20:23.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3382101) - No such process 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3382101 is not found' 00:20:23.766 Process with pid 3382101 is not found 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.766 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:24.025 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:24.025 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:24.025 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.025 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.025 09:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.931 09:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.931 00:20:25.931 real 0m7.400s 00:20:25.931 user 0m17.965s 00:20:25.931 sys 0m1.322s 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.931 ************************************ 00:20:25.931 END TEST nvmf_shutdown_tc3 00:20:25.931 ************************************ 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:25.931 ************************************ 00:20:25.931 START TEST nvmf_shutdown_tc4 00:20:25.931 ************************************ 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.931 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:26.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:26.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:26.191 Found net devices under 0000:af:00.0: cvl_0_0 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:26.191 Found net devices under 0000:af:00.1: cvl_0_1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.191 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:20:26.450 00:20:26.450 --- 10.0.0.2 ping statistics --- 00:20:26.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.450 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:26.450 00:20:26.450 --- 10.0.0.1 ping statistics --- 00:20:26.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.450 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3383403 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3383403 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3383403 ']' 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.450 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.450 [2024-12-13 09:31:38.758137] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:26.450 [2024-12-13 09:31:38.758185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.709 [2024-12-13 09:31:38.825443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.709 [2024-12-13 09:31:38.866823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.709 [2024-12-13 09:31:38.866857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.709 [2024-12-13 09:31:38.866865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.709 [2024-12-13 09:31:38.866871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.709 [2024-12-13 09:31:38.866876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.709 [2024-12-13 09:31:38.868381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.709 [2024-12-13 09:31:38.868476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.709 [2024-12-13 09:31:38.868605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.709 [2024-12-13 09:31:38.868606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.709 09:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.709 [2024-12-13 09:31:39.006008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:26.709 09:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.709 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.710 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:26.968 Malloc1 
00:20:26.968 [2024-12-13 09:31:39.119154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.968 Malloc2 00:20:26.968 Malloc3 00:20:26.968 Malloc4 00:20:26.968 Malloc5 00:20:26.968 Malloc6 00:20:27.227 Malloc7 00:20:27.227 Malloc8 00:20:27.227 Malloc9 00:20:27.227 Malloc10 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3383666 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:27.227 09:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:27.485 [2024-12-13 09:31:39.612649] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3383403 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3383403 ']' 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3383403 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383403 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383403' 00:20:32.762 killing process with pid 3383403 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3383403 00:20:32.762 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3383403 00:20:32.762 Write completed with error (sct=0, 
sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 starting I/O failed: -6 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 starting I/O failed: -6 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 starting I/O failed: -6 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 starting I/O failed: -6 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 starting I/O failed: -6 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.762 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 
00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 [2024-12-13 09:31:44.622859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 [2024-12-13 09:31:44.623240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399d70 is same with the state(6) to be set 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 [2024-12-13 09:31:44.623279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399d70 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.623287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399d70 is same with the state(6) to be set 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 [2024-12-13 09:31:44.623294] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399d70 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.623301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399d70 is same with Write completed with error (sct=0, sc=8) 00:20:32.763 the state(6) to be set 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 Write completed with error (sct=0, sc=8) 00:20:32.763 starting I/O failed: -6 00:20:32.763 [2024-12-13 09:31:44.624018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ 
transport error -6 (No such device or address) on qpair id 4 00:20:32.763 NVMe io qpair process completion error 00:20:32.763 [2024-12-13 09:31:44.624596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b090 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.624622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b090 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.624630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b090 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.625025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.625045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.625053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.763 [2024-12-13 09:31:44.625059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260b560 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 [2024-12-13 09:31:44.625427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be 
set 00:20:32.764 [2024-12-13 09:31:44.625433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260abc0 is same with the state(6) to be set 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 
starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error (sct=0, sc=8) 00:20:32.764 starting I/O failed: -6 00:20:32.764 Write completed with error 
(sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 [2024-12-13 09:31:44.628114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:32.765 NVMe io qpair process completion error 00:20:32.765 [2024-12-13 09:31:44.629943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393d80 is same with the state(6) to be set 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 [2024-12-13 09:31:44.630496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23933e0 is same with starting I/O failed: -6 00:20:32.765 the state(6) to be set 00:20:32.765 [2024-12-13 09:31:44.630522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23933e0 is same with the state(6) to be set 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 
[2024-12-13 09:31:44.630530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23933e0 is same with the state(6) to be set 00:20:32.765 [2024-12-13 09:31:44.630537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23933e0 is same with the state(6) to be set 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 [2024-12-13 09:31:44.630760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:32.765 NVMe io qpair process completion error 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error 
(sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 [2024-12-13 09:31:44.631729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 Write completed with error (sct=0, sc=8) 00:20:32.765 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 
00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 [2024-12-13 09:31:44.632605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 
00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 [2024-12-13 09:31:44.633609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write 
completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.766 Write completed with error (sct=0, sc=8) 00:20:32.766 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 [2024-12-13 09:31:44.635254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:32.767 NVMe io qpair process completion error 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error 
(sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 [2024-12-13 09:31:44.636172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 
00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 [2024-12-13 09:31:44.637070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed 
with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 starting I/O failed: -6 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.767 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 [2024-12-13 09:31:44.638088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 
00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 
00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 [2024-12-13 09:31:44.639907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:32.768 NVMe io qpair process completion error 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 Write 
completed with error (sct=0, sc=8) 00:20:32.768 starting I/O failed: -6 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.768 [2024-12-13 09:31:44.640870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:32.768 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with 
error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 [2024-12-13 09:31:44.641768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 
00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 [2024-12-13 09:31:44.642791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.769 Write completed with error (sct=0, sc=8) 00:20:32.769 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write completed with error (sct=0, sc=8) 00:20:32.770 starting I/O failed: -6 00:20:32.770 Write 
completed with error (sct=0, sc=8)
00:20:32.770 starting I/O failed: -6
00:20:32.770 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeated for each outstanding write I/O]
00:20:32.770 [2024-12-13 09:31:44.644394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:32.770 NVMe io qpair process completion error
00:20:32.770 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeated for each outstanding write I/O]
00:20:32.770 [2024-12-13 09:31:44.645372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:32.770 [2024-12-13 09:31:44.646242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:32.771 [2024-12-13 09:31:44.647271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:32.771 [2024-12-13 09:31:44.651445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:32.771 NVMe io qpair process completion error
00:20:32.772 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeated for each outstanding write I/O]
00:20:32.772 [2024-12-13 09:31:44.652477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:32.772 [2024-12-13 09:31:44.653320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:32.772 [2024-12-13 09:31:44.654343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:32.773 [2024-12-13 09:31:44.657439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:32.773 NVMe io qpair process completion error
00:20:32.773 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeated for each outstanding write I/O]
00:20:32.773 [2024-12-13 09:31:44.658434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:32.773 [2024-12-13 09:31:44.659357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:32.774 [2024-12-13 09:31:44.660364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:32.774 [2024-12-13 09:31:44.662279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:32.774 NVMe io qpair process completion error
00:20:32.775 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeated for each outstanding write I/O]
00:20:32.775 [2024-12-13 09:31:44.663319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:32.775 [2024-12-13 09:31:44.664244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:32.775 [2024-12-13 09:31:44.665236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:32.776 [2024-12-13 09:31:44.669650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:32.776 NVMe io qpair process completion error
00:20:32.776 [Write completed with error (sct=0, sc=8) repeated for each remaining completion]
00:20:32.776 Write
completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.776 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 00:20:32.777 Write completed with error (sct=0, sc=8) 
00:20:32.777 Initializing NVMe Controllers
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:32.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
[each attached controller reported: Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.]
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:32.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:32.777 Initialization complete. Launching workers.
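The latency summary below is the output of spdk_nvme_perf, which was driving writes to all ten subsystems when the target was shut down underneath it; that is why every device shows aborted completions above and why the tool exits with "errors occurred". For orientation only, a comparable standalone run against one of these listeners would look roughly like the sketch below; the queue depth, I/O size and runtime are illustrative, not the values shutdown.sh actually passes.

    # sketch: queued 4 KiB writes at QD 64 for 10 s against one NVMe/TCP subsystem (values illustrative)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -q 64 -o 4096 -w write -t 10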
00:20:32.777 ========================================================
00:20:32.777 Latency(us)
00:20:32.777 Device Information : IOPS MiB/s Average min max
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2195.00 94.32 58633.87 687.09 110664.96
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2161.44 92.87 59825.28 502.60 122683.26
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2171.49 93.31 58944.48 917.93 107508.25
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2171.27 93.30 58961.49 905.83 104930.81
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2208.68 94.90 57976.21 899.87 103017.48
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2211.89 95.04 57904.79 728.70 103881.48
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2223.00 95.52 57657.52 877.95 106600.49
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2236.04 96.08 57349.56 737.04 110588.16
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2269.82 97.53 56511.36 740.52 112478.19
00:20:32.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2227.49 95.71 57283.32 504.87 101877.59
00:20:32.777 ========================================================
00:20:32.777 Total : 22076.13 948.58 58091.54 502.60 122683.26
00:20:32.777
00:20:32.777 [2024-12-13 09:31:44.685637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022ae0 is same with the state(6) to be set
00:20:32.777 [2024-12-13 09:31:44.685695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020ef0 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020bc0 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020890 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020560 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021740 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022900 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021a70 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1021410 is same with the state(6) to be set
00:20:32.778 [2024-12-13 09:31:44.685945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022720 is same with the state(6) to be set
00:20:32.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:32.778 09:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:33.713 09:31:45
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3383666 00:20:33.713 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:33.713 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3383666 00:20:33.713 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:33.713 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:33.713 09:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3383666 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.713 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.713 rmmod nvme_tcp 00:20:33.713 rmmod nvme_fabrics 00:20:33.713 rmmod nvme_keyring 00:20:33.972 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3383403 ']' 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3383403 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3383403 ']' 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3383403 00:20:33.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3383403) - No such process 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3383403 is not found' 00:20:33.973 Process with pid 3383403 is not found 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.973 09:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.877 00:20:35.877 real 0m9.900s 00:20:35.877 user 0m24.955s 00:20:35.877 sys 0m5.151s 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 ************************************ 00:20:35.877 END TEST nvmf_shutdown_tc4 00:20:35.877 ************************************ 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:35.877 00:20:35.877 real 0m39.920s 00:20:35.877 user 1m37.469s 00:20:35.877 sys 0m13.623s 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.877 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.877 ************************************ 00:20:35.877 END TEST nvmf_shutdown 00:20:35.877 ************************************ 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 ************************************ 00:20:36.136 START TEST nvmf_nsid 00:20:36.136 ************************************ 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:36.136 * Looking for test storage... 00:20:36.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:36.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.136 --rc genhtml_branch_coverage=1 00:20:36.136 --rc genhtml_function_coverage=1 00:20:36.136 --rc genhtml_legend=1 00:20:36.136 --rc geninfo_all_blocks=1 00:20:36.136 --rc geninfo_unexecuted_blocks=1 00:20:36.136 00:20:36.136 ' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:36.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.136 --rc genhtml_branch_coverage=1 00:20:36.136 --rc genhtml_function_coverage=1 00:20:36.136 --rc genhtml_legend=1 00:20:36.136 --rc geninfo_all_blocks=1 00:20:36.136 --rc geninfo_unexecuted_blocks=1 00:20:36.136 00:20:36.136 ' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:36.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.136 --rc genhtml_branch_coverage=1 00:20:36.136 --rc genhtml_function_coverage=1 00:20:36.136 --rc genhtml_legend=1 00:20:36.136 --rc geninfo_all_blocks=1 00:20:36.136 --rc geninfo_unexecuted_blocks=1 00:20:36.136 00:20:36.136 ' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:36.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.136 --rc genhtml_branch_coverage=1 00:20:36.136 --rc genhtml_function_coverage=1 00:20:36.136 --rc genhtml_legend=1 00:20:36.136 --rc geninfo_all_blocks=1 00:20:36.136 --rc geninfo_unexecuted_blocks=1 00:20:36.136 00:20:36.136 ' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.136 09:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:42.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:42.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
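Both E810 ports have now been matched by PCI ID (8086:159b); the step below walks /sys/bus/pci/devices/<bdf>/net to find the network interface behind each function. The same mapping can be confirmed by hand, outside the test scripts, with something like:

    # manual cross-check (not part of the test flow)
    lspci -nn -s af:00.0                        # expect the Intel device [8086:159b]
    ls /sys/bus/pci/devices/0000:af:00.0/net/   # expect cvl_0_0, the interface the script picks up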
00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:42.705 Found net devices under 0000:af:00.0: cvl_0_0 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:42.705 Found net devices under 0000:af:00.1: cvl_0_1 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.705 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.706 09:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.706 09:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:20:42.706 00:20:42.706 --- 10.0.0.2 ping statistics --- 00:20:42.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.706 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:42.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:20:42.706 00:20:42.706 --- 10.0.0.1 ping statistics --- 00:20:42.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.706 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3388094 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3388094 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3388094 ']' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.706 [2024-12-13 09:31:54.282663] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
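Stripped of the xtrace noise, the network setup that nvmf_tcp_init performed above boils down to the sequence below; interface names and addresses are the ones used in this run, and the script additionally tags the iptables rule with an SPDK_NVMF comment so nvmftestfini can strip it again later.

    # target side: move one E810 port into its own namespace and address it as 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: keep the second port in the root namespace as 10.0.0.1 and open the NVMe/TCP port
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions, then start the target inside the namespace (core mask 0x1, tracepoint mask 0xFFFF)
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1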
00:20:42.706 [2024-12-13 09:31:54.282706] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.706 [2024-12-13 09:31:54.351019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.706 [2024-12-13 09:31:54.393674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.706 [2024-12-13 09:31:54.393710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.706 [2024-12-13 09:31:54.393718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.706 [2024-12-13 09:31:54.393724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.706 [2024-12-13 09:31:54.393729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.706 [2024-12-13 09:31:54.394241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3388273 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e3fb4456-c979-4da1-b4db-edd3cdb2dd2a 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=30cab002-73b7-47ef-a69c-7bd3b8b9de76 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1584a10f-eff1-4804-b7bd-452d2844a499 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.706 null0 00:20:42.706 null1 00:20:42.706 [2024-12-13 09:31:54.578634] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:20:42.706 [2024-12-13 09:31:54.578677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388273 ] 00:20:42.706 null2 00:20:42.706 [2024-12-13 09:31:54.583112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.706 [2024-12-13 09:31:54.607312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.706 [2024-12-13 09:31:54.640584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3388273 /var/tmp/tgt2.sock 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3388273 ']' 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:42.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
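The three uuidgen calls above provide the UUIDs for namespaces 1-3 on the second target; further down the test reads each namespace's NGUID back through nvme id-ns and expects it to equal the UUID with its dashes stripped. A minimal sketch of that comparison (nvme-cli and jq assumed available; the helper names and device path are illustrative, not the script's own):

    uuid2nguid() {
        local u=${1^^}          # uppercase the UUID
        echo "${u//-/}"         # drop the dashes -> 32-hex NGUID
    }

    check_nguid() {
        local dev=$1 uuid=$2 want got
        want=$(uuid2nguid "$uuid")
        got=$(nvme id-ns "$dev" -o json | jq -r .nguid)
        [[ ${got^^} == "$want" ]] || { echo "NGUID mismatch on $dev" >&2; return 1; }
    }

    check_nguid /dev/nvme0n1 e3fb4456-c979-4da1-b4db-edd3cdb2dd2a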
00:20:42.706 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.707 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:42.707 [2024-12-13 09:31:54.680978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.707 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.707 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:42.707 09:31:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:42.964 [2024-12-13 09:31:55.200318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.964 [2024-12-13 09:31:55.216409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:42.964 nvme0n1 nvme0n2 00:20:42.964 nvme1n1 00:20:42.964 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:42.964 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:42.964 09:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:44.341 09:31:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:45.278 09:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e3fb4456-c979-4da1-b4db-edd3cdb2dd2a 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e3fb4456c9794da1b4dbedd3cdb2dd2a 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E3FB4456C9794DA1B4DBEDD3CDB2DD2A 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E3FB4456C9794DA1B4DBEDD3CDB2DD2A == \E\3\F\B\4\4\5\6\C\9\7\9\4\D\A\1\B\4\D\B\E\D\D\3\C\D\B\2\D\D\2\A ]] 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 30cab002-73b7-47ef-a69c-7bd3b8b9de76 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=30cab00273b747efa69c7bd3b8b9de76 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 30CAB00273B747EFA69C7BD3B8B9DE76 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 30CAB00273B747EFA69C7BD3B8B9DE76 == \3\0\C\A\B\0\0\2\7\3\B\7\4\7\E\F\A\6\9\C\7\B\D\3\B\8\B\9\D\E\7\6 ]] 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:45.278 09:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1584a10f-eff1-4804-b7bd-452d2844a499 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1584a10feff14804b7bd452d2844a499 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1584A10FEFF14804B7BD452D2844A499 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1584A10FEFF14804B7BD452D2844A499 == \1\5\8\4\A\1\0\F\E\F\F\1\4\8\0\4\B\7\B\D\4\5\2\D\2\8\4\4\A\4\9\9 ]] 00:20:45.278 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3388273 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3388273 ']' 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3388273 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3388273 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3388273' 00:20:45.537 killing process with pid 3388273 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3388273 00:20:45.537 09:31:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3388273 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 
00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.796 rmmod nvme_tcp 00:20:45.796 rmmod nvme_fabrics 00:20:45.796 rmmod nvme_keyring 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3388094 ']' 00:20:45.796 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3388094 00:20:45.797 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3388094 ']' 00:20:45.797 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3388094 00:20:45.797 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:45.797 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.797 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3388094 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3388094' 00:20:46.056 killing process with pid 3388094 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3388094 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3388094 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.056 09:31:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.592 00:20:48.592 real 0m12.171s 00:20:48.592 user 0m9.578s 00:20:48.592 sys 0m5.314s 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:48.592 ************************************ 00:20:48.592 END TEST nvmf_nsid 00:20:48.592 ************************************ 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:48.592 00:20:48.592 real 11m44.421s 00:20:48.592 user 25m22.227s 00:20:48.592 sys 3m34.081s 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.592 09:32:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.592 ************************************ 00:20:48.592 END TEST nvmf_target_extra 00:20:48.592 ************************************ 00:20:48.592 09:32:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:48.592 09:32:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.592 09:32:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.592 09:32:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.592 ************************************ 00:20:48.592 START TEST nvmf_host 00:20:48.592 ************************************ 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:48.592 * Looking for test storage... 00:20:48.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.592 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.593 --rc genhtml_branch_coverage=1 00:20:48.593 --rc genhtml_function_coverage=1 00:20:48.593 --rc genhtml_legend=1 00:20:48.593 --rc geninfo_all_blocks=1 00:20:48.593 --rc geninfo_unexecuted_blocks=1 00:20:48.593 00:20:48.593 ' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.593 --rc genhtml_branch_coverage=1 00:20:48.593 --rc genhtml_function_coverage=1 00:20:48.593 --rc genhtml_legend=1 00:20:48.593 --rc geninfo_all_blocks=1 00:20:48.593 --rc geninfo_unexecuted_blocks=1 00:20:48.593 00:20:48.593 ' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.593 --rc genhtml_branch_coverage=1 00:20:48.593 --rc genhtml_function_coverage=1 00:20:48.593 --rc genhtml_legend=1 00:20:48.593 --rc geninfo_all_blocks=1 00:20:48.593 --rc geninfo_unexecuted_blocks=1 00:20:48.593 00:20:48.593 ' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.593 --rc genhtml_branch_coverage=1 00:20:48.593 --rc genhtml_function_coverage=1 00:20:48.593 --rc genhtml_legend=1 00:20:48.593 --rc geninfo_all_blocks=1 00:20:48.593 --rc geninfo_unexecuted_blocks=1 00:20:48.593 00:20:48.593 ' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
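Among the nvmf/common.sh defaults being set here is the host identity: just below, nvme gen-hostnqn seeds NVME_HOSTNQN and its UUID portion becomes the host ID, both of which the initiator later hands to nvme connect (as already seen in the nsid run above). A short sketch of that pairing, with the target address, port, and subsystem NQN taken from this log purely for illustration:

    HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}              # bare UUID portion of the NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"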
00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.593 ************************************ 00:20:48.593 START TEST nvmf_multicontroller 00:20:48.593 ************************************ 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:48.593 * Looking for test storage... 
00:20:48.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.593 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.594 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:48.852 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.853 --rc genhtml_branch_coverage=1 00:20:48.853 --rc genhtml_function_coverage=1 00:20:48.853 --rc genhtml_legend=1 00:20:48.853 --rc geninfo_all_blocks=1 00:20:48.853 --rc geninfo_unexecuted_blocks=1 00:20:48.853 00:20:48.853 ' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.853 --rc genhtml_branch_coverage=1 00:20:48.853 --rc genhtml_function_coverage=1 00:20:48.853 --rc genhtml_legend=1 00:20:48.853 --rc geninfo_all_blocks=1 00:20:48.853 --rc geninfo_unexecuted_blocks=1 00:20:48.853 00:20:48.853 ' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.853 --rc genhtml_branch_coverage=1 00:20:48.853 --rc genhtml_function_coverage=1 00:20:48.853 --rc genhtml_legend=1 00:20:48.853 --rc geninfo_all_blocks=1 00:20:48.853 --rc geninfo_unexecuted_blocks=1 00:20:48.853 00:20:48.853 ' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.853 --rc genhtml_branch_coverage=1 00:20:48.853 --rc genhtml_function_coverage=1 00:20:48.853 --rc genhtml_legend=1 00:20:48.853 --rc geninfo_all_blocks=1 00:20:48.853 --rc geninfo_unexecuted_blocks=1 00:20:48.853 00:20:48.853 ' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:48.853 09:32:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:48.853 09:32:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.853 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:48.867 09:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.867 09:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.137 
09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.137 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:54.138 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:54.138 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.138 09:32:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:54.138 Found net devices under 0000:af:00.0: cvl_0_0 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:54.138 Found net devices under 0000:af:00.1: cvl_0_1 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
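The block above is gather_supported_nvmf_pci_devs walking the PCI bus: it matches the two Intel E810 ports (device ID 0x159b) at 0000:af:00.0 and 0000:af:00.1 and resolves each to its kernel interface through sysfs, yielding cvl_0_0 and cvl_0_1. A stand-alone sketch of that resolution, with the vendor and device IDs taken from this log:

    # print the net device behind every Intel E810 (8086:159b) PCI function
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"   # e.g. 0000:af:00.0: cvl_0_0
        done
    done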
00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.138 09:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:54.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:20:54.138 00:20:54.138 --- 10.0.0.2 ping statistics --- 00:20:54.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.138 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:20:54.138 00:20:54.138 --- 10.0.0.1 ping statistics --- 00:20:54.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.138 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3392290 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3392290 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3392290 ']' 00:20:54.138 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.139 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.139 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.139 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.139 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.139 [2024-12-13 09:32:06.324663] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:20:54.139 [2024-12-13 09:32:06.324715] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.139 [2024-12-13 09:32:06.392005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:54.139 [2024-12-13 09:32:06.431961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.139 [2024-12-13 09:32:06.432002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.139 [2024-12-13 09:32:06.432009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.139 [2024-12-13 09:32:06.432015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.139 [2024-12-13 09:32:06.432019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.139 [2024-12-13 09:32:06.433358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.139 [2024-12-13 09:32:06.433454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.139 [2024-12-13 09:32:06.433457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 [2024-12-13 09:32:06.577688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 Malloc0 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 [2024-12-13 09:32:06.641954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 [2024-12-13 09:32:06.649873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 Malloc1 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3392366 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3392366 /var/tmp/bdevperf.sock 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3392366 ']' 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
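At this point the target side has been provisioned: one TCP transport, two 64 MB malloc bdevs (512-byte blocks), and two subsystems (cnode1 and cnode2) that each expose a namespace and listen on ports 4420 and 4421 of 10.0.0.2, while bdevperf has been started idle on the host side. The rpc_cmd helper in these traces wraps scripts/rpc.py (here against the default /var/tmp/spdk.sock of the nvmf_tgt started in the namespace); outside the harness the same setup could be reproduced roughly as:

    # Target side (nvmf_tgt running inside cvl_0_0_ns_spdk).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 mirrors cnode1 with Malloc1 and serial SPDK00000000000002.

    # Host side: bdevperf is started idle (-z) with its own RPC socket so the test
    # can attach controllers and trigger I/O later.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

With -z, bdevperf waits for RPCs instead of running a job immediately; -q/-o/-w/-t select queue depth 128, 4 KiB I/O, a write workload, and a 1-second run once perform_tests is issued over /var/tmp/bdevperf.sock.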
00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.398 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.658 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.658 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:54.658 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:54.658 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.658 09:32:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.919 NVMe0n1 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.919 1 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.919 request: 00:20:54.919 { 00:20:54.919 "name": "NVMe0", 00:20:54.919 "trtype": "tcp", 00:20:54.919 "traddr": "10.0.0.2", 00:20:54.919 "adrfam": "ipv4", 00:20:54.919 "trsvcid": "4420", 00:20:54.919 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:54.919 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:54.919 "hostaddr": "10.0.0.1", 00:20:54.919 "prchk_reftag": false, 00:20:54.919 "prchk_guard": false, 00:20:54.919 "hdgst": false, 00:20:54.919 "ddgst": false, 00:20:54.919 "allow_unrecognized_csi": false, 00:20:54.919 "method": "bdev_nvme_attach_controller", 00:20:54.919 "req_id": 1 00:20:54.919 } 00:20:54.919 Got JSON-RPC error response 00:20:54.919 response: 00:20:54.919 { 00:20:54.919 "code": -114, 00:20:54.919 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.919 } 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.919 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.920 request: 00:20:54.920 { 00:20:54.920 "name": "NVMe0", 00:20:54.920 "trtype": "tcp", 00:20:54.920 "traddr": "10.0.0.2", 00:20:54.920 "adrfam": "ipv4", 00:20:54.920 "trsvcid": "4420", 00:20:54.920 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.920 "hostaddr": "10.0.0.1", 00:20:54.920 "prchk_reftag": false, 00:20:54.920 "prchk_guard": false, 00:20:54.920 "hdgst": false, 00:20:54.920 "ddgst": false, 00:20:54.920 "allow_unrecognized_csi": false, 00:20:54.920 "method": "bdev_nvme_attach_controller", 00:20:54.920 "req_id": 1 00:20:54.920 } 00:20:54.920 Got JSON-RPC error response 00:20:54.920 response: 00:20:54.920 { 00:20:54.920 "code": -114, 00:20:54.920 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.920 } 00:20:54.920 09:32:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.920 request: 00:20:54.920 { 00:20:54.920 "name": "NVMe0", 00:20:54.920 "trtype": "tcp", 00:20:54.920 "traddr": "10.0.0.2", 00:20:54.920 "adrfam": "ipv4", 00:20:54.920 "trsvcid": "4420", 00:20:54.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.920 "hostaddr": "10.0.0.1", 00:20:54.920 "prchk_reftag": false, 00:20:54.920 "prchk_guard": false, 00:20:54.920 "hdgst": false, 00:20:54.920 "ddgst": false, 00:20:54.920 "multipath": "disable", 00:20:54.920 "allow_unrecognized_csi": false, 00:20:54.920 "method": "bdev_nvme_attach_controller", 00:20:54.920 "req_id": 1 00:20:54.920 } 00:20:54.920 Got JSON-RPC error response 00:20:54.920 response: 00:20:54.920 { 00:20:54.920 "code": -114, 00:20:54.920 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:54.920 } 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.920 09:32:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.920 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.179 request: 00:20:55.179 { 00:20:55.179 "name": "NVMe0", 00:20:55.179 "trtype": "tcp", 00:20:55.179 "traddr": "10.0.0.2", 00:20:55.179 "adrfam": "ipv4", 00:20:55.179 "trsvcid": "4420", 00:20:55.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.179 "hostaddr": "10.0.0.1", 00:20:55.179 "prchk_reftag": false, 00:20:55.179 "prchk_guard": false, 00:20:55.179 "hdgst": false, 00:20:55.179 "ddgst": false, 00:20:55.179 "multipath": "failover", 00:20:55.179 "allow_unrecognized_csi": false, 00:20:55.179 "method": "bdev_nvme_attach_controller", 00:20:55.179 "req_id": 1 00:20:55.179 } 00:20:55.179 Got JSON-RPC error response 00:20:55.179 response: 00:20:55.179 { 00:20:55.179 "code": -114, 00:20:55.179 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:55.179 } 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.179 NVMe0n1 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
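The four negative cases above all hit the same collision: once NVMe0 exists over 10.0.0.2:4420, reusing that controller name against the same listener is rejected with -114 whether the request changes the host NQN, targets a different subsystem, disables multipath, or asks for failover explicitly. What does succeed, as the @79 call shows, is attaching the same name to the subsystem's other listener, which is accepted as an additional path for failover rather than a name clash. Reduced to the two RPCs that matter, and assuming the bdevperf socket used here:

    # First attach creates controller NVMe0 over 10.0.0.2:4420.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Re-attaching NVMe0 on 4420 fails with -114; attaching it on the 4421 listener
    # of the same subsystem adds a second path to the existing controller instead.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1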
00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.179 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.438 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:55.438 09:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.817 { 00:20:56.817 "results": [ 00:20:56.817 { 00:20:56.817 "job": "NVMe0n1", 00:20:56.817 "core_mask": "0x1", 00:20:56.817 "workload": "write", 00:20:56.817 "status": "finished", 00:20:56.817 "queue_depth": 128, 00:20:56.817 "io_size": 4096, 00:20:56.817 "runtime": 1.007757, 00:20:56.817 "iops": 25587.51762577685, 00:20:56.817 "mibps": 99.95124072569082, 00:20:56.817 "io_failed": 0, 00:20:56.817 "io_timeout": 0, 00:20:56.817 "avg_latency_us": 4995.6598281090155, 00:20:56.817 "min_latency_us": 1482.3619047619047, 00:20:56.817 "max_latency_us": 9861.60761904762 00:20:56.817 } 00:20:56.817 ], 00:20:56.817 "core_count": 1 00:20:56.817 } 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3392366 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3392366 ']' 00:20:56.817 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3392366 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392366 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392366' 00:20:56.818 killing process with pid 3392366 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3392366 00:20:56.818 09:32:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3392366 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:56.818 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.818 [2024-12-13 09:32:06.754745] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:20:56.818 [2024-12-13 09:32:06.754792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392366 ] 00:20:56.818 [2024-12-13 09:32:06.817709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.818 [2024-12-13 09:32:06.858218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.818 [2024-12-13 09:32:07.646928] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 84e76c14-8cf2-4407-ab37-34253ccf80fe already exists 00:20:56.818 [2024-12-13 09:32:07.646954] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:84e76c14-8cf2-4407-ab37-34253ccf80fe alias for bdev NVMe1n1 00:20:56.818 [2024-12-13 09:32:07.646963] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:56.818 Running I/O for 1 seconds... 00:20:56.818 25531.00 IOPS, 99.73 MiB/s 00:20:56.818 Latency(us) 00:20:56.818 [2024-12-13T08:32:09.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.818 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:56.818 NVMe0n1 : 1.01 25587.52 99.95 0.00 0.00 4995.66 1482.36 9861.61 00:20:56.818 [2024-12-13T08:32:09.184Z] =================================================================================================================== 00:20:56.818 [2024-12-13T08:32:09.184Z] Total : 25587.52 99.95 0.00 0.00 4995.66 1482.36 9861.61 00:20:56.818 Received shutdown signal, test time was about 1.000000 seconds 00:20:56.818 00:20:56.818 Latency(us) 00:20:56.818 [2024-12-13T08:32:09.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.818 [2024-12-13T08:32:09.184Z] =================================================================================================================== 00:20:56.818 [2024-12-13T08:32:09.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.818 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.818 rmmod nvme_tcp 00:20:56.818 rmmod nvme_fabrics 00:20:56.818 rmmod nvme_keyring 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:56.818 
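Cleanup then unwinds the fixture in reverse order: the kernel NVMe modules loaded during setup are removed, the nvmf_tgt process is killed, the SPDK-tagged firewall rules are stripped, and the namespace plumbing is torn down. A rough sketch of the nvmftestfini path traced here and in the lines that follow, with the namespace deletion inferred from remove_spdk_ns (whose commands are traced with xtrace disabled):

    # Host-side kernel modules loaded by "modprobe nvme-tcp" during setup.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf_tgt started earlier (pid 3392290 in this run).
    kill 3392290

    # Drop only the rules this test added: setup tagged them with an SPDK_NVMF
    # comment, so they can be filtered out of the saved rule set.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target namespace and flush leftover addressing on the initiator port.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1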
09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3392290 ']' 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3392290 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3392290 ']' 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3392290 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392290 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392290' 00:20:56.818 killing process with pid 3392290 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3392290 00:20:56.818 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3392290 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.077 09:32:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:59.615 00:20:59.615 real 0m10.668s 00:20:59.615 user 0m12.797s 00:20:59.615 sys 0m4.637s 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.615 ************************************ 00:20:59.615 END TEST nvmf_multicontroller 00:20:59.615 ************************************ 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.615 ************************************ 00:20:59.615 START TEST nvmf_aer 00:20:59.615 ************************************ 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.615 * Looking for test storage... 00:20:59.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.615 --rc genhtml_branch_coverage=1 00:20:59.615 --rc genhtml_function_coverage=1 00:20:59.615 --rc genhtml_legend=1 00:20:59.615 --rc geninfo_all_blocks=1 00:20:59.615 --rc geninfo_unexecuted_blocks=1 00:20:59.615 00:20:59.615 ' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.615 --rc genhtml_branch_coverage=1 00:20:59.615 --rc genhtml_function_coverage=1 00:20:59.615 --rc genhtml_legend=1 00:20:59.615 --rc geninfo_all_blocks=1 00:20:59.615 --rc geninfo_unexecuted_blocks=1 00:20:59.615 00:20:59.615 ' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.615 --rc genhtml_branch_coverage=1 00:20:59.615 --rc genhtml_function_coverage=1 00:20:59.615 --rc genhtml_legend=1 00:20:59.615 --rc geninfo_all_blocks=1 00:20:59.615 --rc geninfo_unexecuted_blocks=1 00:20:59.615 00:20:59.615 ' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.615 --rc genhtml_branch_coverage=1 00:20:59.615 --rc genhtml_function_coverage=1 00:20:59.615 --rc genhtml_legend=1 00:20:59.615 --rc geninfo_all_blocks=1 00:20:59.615 --rc geninfo_unexecuted_blocks=1 00:20:59.615 00:20:59.615 ' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.615 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.616 09:32:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:04.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:04.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:04.893 Found net devices under 0000:af:00.0: cvl_0_0 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.893 09:32:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:04.893 Found net devices under 0000:af:00.1: cvl_0_1 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.893 09:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.893 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.893 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.894 
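For reference, the namespace plumbing that nvmf_tcp_init traced just above reduces to a handful of ip/iptables commands. The condensed sketch below uses the interface names from this run (cvl_0_0 / cvl_0_1) and the same addresses; it is a reconstruction of the traced sequence, not the helper itself, and the real helper additionally flushes old addresses first and tags the iptables rule with an SPDK_NVMF comment so it can be removed on cleanup.

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address the initiator side (default namespace) and the target side (inside the namespace)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring the links up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow in the trace only confirm reachability in both directions before the target application is started.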
09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:21:04.894 00:21:04.894 --- 10.0.0.2 ping statistics --- 00:21:04.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.894 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:21:04.894 00:21:04.894 --- 10.0.0.1 ping statistics --- 00:21:04.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.894 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3396236 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3396236 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3396236 ']' 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.894 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.894 [2024-12-13 09:32:17.231599] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
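With connectivity verified, nvmfappstart launches nvmf_tgt inside that namespace and waits for its RPC socket, and the AER test subsystem is then built up through the rpc_cmd calls that follow in the trace. A minimal sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock and with the repository path shortened:

# start the target in the namespace (same invocation as nvmfappstart above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# once the RPC socket answers, create the TCP transport, a malloc bdev and the test subsystem
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The aer tool is then attached to that listener, and adding Malloc1 as a second namespace is what triggers the namespace-attribute notice logged further down.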
00:21:04.894 [2024-12-13 09:32:17.231648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.154 [2024-12-13 09:32:17.301488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.154 [2024-12-13 09:32:17.343754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.154 [2024-12-13 09:32:17.343791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.154 [2024-12-13 09:32:17.343797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.154 [2024-12-13 09:32:17.343803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.154 [2024-12-13 09:32:17.343808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.154 [2024-12-13 09:32:17.345255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.154 [2024-12-13 09:32:17.345349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.154 [2024-12-13 09:32:17.345440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.154 [2024-12-13 09:32:17.345441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.154 [2024-12-13 09:32:17.483352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.154 Malloc0 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.154 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 [2024-12-13 09:32:17.541777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 [ 00:21:05.413 { 00:21:05.413 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:05.413 "subtype": "Discovery", 00:21:05.413 "listen_addresses": [], 00:21:05.413 "allow_any_host": true, 00:21:05.413 "hosts": [] 00:21:05.413 }, 00:21:05.413 { 00:21:05.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.413 "subtype": "NVMe", 00:21:05.413 "listen_addresses": [ 00:21:05.413 { 00:21:05.413 "trtype": "TCP", 00:21:05.413 "adrfam": "IPv4", 00:21:05.413 "traddr": "10.0.0.2", 00:21:05.413 "trsvcid": "4420" 00:21:05.413 } 00:21:05.413 ], 00:21:05.413 "allow_any_host": true, 00:21:05.413 "hosts": [], 00:21:05.413 "serial_number": "SPDK00000000000001", 00:21:05.413 "model_number": "SPDK bdev Controller", 00:21:05.413 "max_namespaces": 2, 00:21:05.413 "min_cntlid": 1, 00:21:05.413 "max_cntlid": 65519, 00:21:05.414 "namespaces": [ 00:21:05.414 { 00:21:05.414 "nsid": 1, 00:21:05.414 "bdev_name": "Malloc0", 00:21:05.414 "name": "Malloc0", 00:21:05.414 "nguid": "03DE0AAA2C914DE7AF28536023A3E4D2", 00:21:05.414 "uuid": "03de0aaa-2c91-4de7-af28-536023a3e4d2" 00:21:05.414 } 00:21:05.414 ] 00:21:05.414 } 00:21:05.414 ] 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3396276 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:05.414 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 Malloc1 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 Asynchronous Event Request test 00:21:05.673 Attaching to 10.0.0.2 00:21:05.673 Attached to 10.0.0.2 00:21:05.673 Registering asynchronous event callbacks... 00:21:05.673 Starting namespace attribute notice tests for all controllers... 00:21:05.673 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:05.673 aer_cb - Changed Namespace 00:21:05.673 Cleaning up... 
00:21:05.673 [ 00:21:05.673 { 00:21:05.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:05.673 "subtype": "Discovery", 00:21:05.673 "listen_addresses": [], 00:21:05.673 "allow_any_host": true, 00:21:05.673 "hosts": [] 00:21:05.673 }, 00:21:05.673 { 00:21:05.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.673 "subtype": "NVMe", 00:21:05.673 "listen_addresses": [ 00:21:05.673 { 00:21:05.673 "trtype": "TCP", 00:21:05.673 "adrfam": "IPv4", 00:21:05.673 "traddr": "10.0.0.2", 00:21:05.673 "trsvcid": "4420" 00:21:05.673 } 00:21:05.673 ], 00:21:05.673 "allow_any_host": true, 00:21:05.673 "hosts": [], 00:21:05.673 "serial_number": "SPDK00000000000001", 00:21:05.674 "model_number": "SPDK bdev Controller", 00:21:05.674 "max_namespaces": 2, 00:21:05.674 "min_cntlid": 1, 00:21:05.674 "max_cntlid": 65519, 00:21:05.674 "namespaces": [ 00:21:05.674 { 00:21:05.674 "nsid": 1, 00:21:05.674 "bdev_name": "Malloc0", 00:21:05.674 "name": "Malloc0", 00:21:05.674 "nguid": "03DE0AAA2C914DE7AF28536023A3E4D2", 00:21:05.674 "uuid": "03de0aaa-2c91-4de7-af28-536023a3e4d2" 00:21:05.674 }, 00:21:05.674 { 00:21:05.674 "nsid": 2, 00:21:05.674 "bdev_name": "Malloc1", 00:21:05.674 "name": "Malloc1", 00:21:05.674 "nguid": "D3ED5AF3C7614BA8AE6B3973E33877C6", 00:21:05.674 "uuid": "d3ed5af3-c761-4ba8-ae6b-3973e33877c6" 00:21:05.674 } 00:21:05.674 ] 00:21:05.674 } 00:21:05.674 ] 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3396276 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.674 09:32:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.674 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.674 rmmod 
nvme_tcp 00:21:05.674 rmmod nvme_fabrics 00:21:05.933 rmmod nvme_keyring 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3396236 ']' 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3396236 ']' 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396236' 00:21:05.933 killing process with pid 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3396236 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.933 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.193 09:32:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.100 00:21:08.100 real 0m8.838s 00:21:08.100 user 0m5.473s 00:21:08.100 sys 0m4.459s 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.100 ************************************ 00:21:08.100 END TEST nvmf_aer 00:21:08.100 ************************************ 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.100 ************************************ 00:21:08.100 START TEST nvmf_async_init 00:21:08.100 ************************************ 00:21:08.100 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:08.360 * Looking for test storage... 00:21:08.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.360 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:08.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.361 --rc genhtml_branch_coverage=1 00:21:08.361 --rc genhtml_function_coverage=1 00:21:08.361 --rc genhtml_legend=1 00:21:08.361 --rc geninfo_all_blocks=1 00:21:08.361 --rc geninfo_unexecuted_blocks=1 00:21:08.361 00:21:08.361 ' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:08.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.361 --rc genhtml_branch_coverage=1 00:21:08.361 --rc genhtml_function_coverage=1 00:21:08.361 --rc genhtml_legend=1 00:21:08.361 --rc geninfo_all_blocks=1 00:21:08.361 --rc geninfo_unexecuted_blocks=1 00:21:08.361 00:21:08.361 ' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:08.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.361 --rc genhtml_branch_coverage=1 00:21:08.361 --rc genhtml_function_coverage=1 00:21:08.361 --rc genhtml_legend=1 00:21:08.361 --rc geninfo_all_blocks=1 00:21:08.361 --rc geninfo_unexecuted_blocks=1 00:21:08.361 00:21:08.361 ' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:08.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.361 --rc genhtml_branch_coverage=1 00:21:08.361 --rc genhtml_function_coverage=1 00:21:08.361 --rc genhtml_legend=1 00:21:08.361 --rc geninfo_all_blocks=1 00:21:08.361 --rc geninfo_unexecuted_blocks=1 00:21:08.361 00:21:08.361 ' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.361 09:32:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:08.361 09:32:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e3ceb5dc64fe430687038ba61d0640d6 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.361 09:32:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:13.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:13.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:13.711 Found net devices under 0000:af:00.0: cvl_0_0 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:13.711 Found net devices under 0000:af:00.1: cvl_0_1 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.711 09:32:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.711 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:21:13.712 00:21:13.712 --- 10.0.0.2 ping statistics --- 00:21:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.712 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:13.712 00:21:13.712 --- 10.0.0.1 ping statistics --- 00:21:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.712 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3399750 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3399750 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3399750 ']' 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.712 09:32:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.712 [2024-12-13 09:32:26.024075] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
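The async_init test then repeats the same pattern against a fresh single-core target (-m 0x1). Instead of a malloc bdev it registers a null bdev, which the rpc_cmd calls at the end of this excerpt begin to set up; sketched with scripts/rpc.py and the null_bdev_size/null_block_size values defined above (1024 and 512), with the trace cut off before the remaining steps:

# target started inside the namespace with a single core, as traced above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# transport, null bdev and the async_init subsystem
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a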
00:21:13.712 [2024-12-13 09:32:26.024116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.971 [2024-12-13 09:32:26.091457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.971 [2024-12-13 09:32:26.133350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.971 [2024-12-13 09:32:26.133383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.971 [2024-12-13 09:32:26.133390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.971 [2024-12-13 09:32:26.133396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.971 [2024-12-13 09:32:26.133401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.971 [2024-12-13 09:32:26.133889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 [2024-12-13 09:32:26.265571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 null0 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e3ceb5dc64fe430687038ba61d0640d6 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 [2024-12-13 09:32:26.317838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.231 nvme0n1 00:21:14.231 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.231 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.231 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.231 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.231 [ 00:21:14.231 { 00:21:14.231 "name": "nvme0n1", 00:21:14.231 "aliases": [ 00:21:14.231 "e3ceb5dc-64fe-4306-8703-8ba61d0640d6" 00:21:14.231 ], 00:21:14.231 "product_name": "NVMe disk", 00:21:14.231 "block_size": 512, 00:21:14.231 "num_blocks": 2097152, 00:21:14.231 "uuid": "e3ceb5dc-64fe-4306-8703-8ba61d0640d6", 00:21:14.231 "numa_id": 1, 00:21:14.231 "assigned_rate_limits": { 00:21:14.231 "rw_ios_per_sec": 0, 00:21:14.231 "rw_mbytes_per_sec": 0, 00:21:14.231 "r_mbytes_per_sec": 0, 00:21:14.231 "w_mbytes_per_sec": 0 00:21:14.231 }, 00:21:14.231 "claimed": false, 00:21:14.231 "zoned": false, 00:21:14.231 "supported_io_types": { 00:21:14.231 "read": true, 00:21:14.231 "write": true, 00:21:14.231 "unmap": false, 00:21:14.231 "flush": true, 00:21:14.231 "reset": true, 00:21:14.231 "nvme_admin": true, 00:21:14.231 "nvme_io": true, 00:21:14.231 "nvme_io_md": false, 00:21:14.231 "write_zeroes": true, 00:21:14.231 "zcopy": false, 00:21:14.231 "get_zone_info": false, 00:21:14.231 "zone_management": false, 00:21:14.231 "zone_append": false, 00:21:14.231 "compare": true, 00:21:14.231 "compare_and_write": true, 00:21:14.231 "abort": true, 00:21:14.231 "seek_hole": false, 00:21:14.231 "seek_data": false, 00:21:14.231 "copy": true, 00:21:14.231 "nvme_iov_md": false 00:21:14.231 }, 00:21:14.231 
"memory_domains": [ 00:21:14.231 { 00:21:14.231 "dma_device_id": "system", 00:21:14.231 "dma_device_type": 1 00:21:14.231 } 00:21:14.231 ], 00:21:14.231 "driver_specific": { 00:21:14.231 "nvme": [ 00:21:14.231 { 00:21:14.231 "trid": { 00:21:14.231 "trtype": "TCP", 00:21:14.231 "adrfam": "IPv4", 00:21:14.231 "traddr": "10.0.0.2", 00:21:14.231 "trsvcid": "4420", 00:21:14.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.231 }, 00:21:14.231 "ctrlr_data": { 00:21:14.231 "cntlid": 1, 00:21:14.231 "vendor_id": "0x8086", 00:21:14.231 "model_number": "SPDK bdev Controller", 00:21:14.231 "serial_number": "00000000000000000000", 00:21:14.231 "firmware_revision": "25.01", 00:21:14.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.231 "oacs": { 00:21:14.231 "security": 0, 00:21:14.231 "format": 0, 00:21:14.231 "firmware": 0, 00:21:14.231 "ns_manage": 0 00:21:14.231 }, 00:21:14.231 "multi_ctrlr": true, 00:21:14.231 "ana_reporting": false 00:21:14.231 }, 00:21:14.231 "vs": { 00:21:14.231 "nvme_version": "1.3" 00:21:14.231 }, 00:21:14.231 "ns_data": { 00:21:14.231 "id": 1, 00:21:14.231 "can_share": true 00:21:14.231 } 00:21:14.231 } 00:21:14.231 ], 00:21:14.232 "mp_policy": "active_passive" 00:21:14.232 } 00:21:14.232 } 00:21:14.232 ] 00:21:14.232 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.232 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:14.232 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.232 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.232 [2024-12-13 09:32:26.582382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:14.232 [2024-12-13 09:32:26.582461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1c250 (9): Bad file descriptor 00:21:14.491 [2024-12-13 09:32:26.714521] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 [ 00:21:14.491 { 00:21:14.491 "name": "nvme0n1", 00:21:14.491 "aliases": [ 00:21:14.491 "e3ceb5dc-64fe-4306-8703-8ba61d0640d6" 00:21:14.491 ], 00:21:14.491 "product_name": "NVMe disk", 00:21:14.491 "block_size": 512, 00:21:14.491 "num_blocks": 2097152, 00:21:14.491 "uuid": "e3ceb5dc-64fe-4306-8703-8ba61d0640d6", 00:21:14.491 "numa_id": 1, 00:21:14.491 "assigned_rate_limits": { 00:21:14.491 "rw_ios_per_sec": 0, 00:21:14.491 "rw_mbytes_per_sec": 0, 00:21:14.491 "r_mbytes_per_sec": 0, 00:21:14.491 "w_mbytes_per_sec": 0 00:21:14.491 }, 00:21:14.491 "claimed": false, 00:21:14.491 "zoned": false, 00:21:14.491 "supported_io_types": { 00:21:14.491 "read": true, 00:21:14.491 "write": true, 00:21:14.491 "unmap": false, 00:21:14.491 "flush": true, 00:21:14.491 "reset": true, 00:21:14.491 "nvme_admin": true, 00:21:14.491 "nvme_io": true, 00:21:14.491 "nvme_io_md": false, 00:21:14.491 "write_zeroes": true, 00:21:14.491 "zcopy": false, 00:21:14.491 "get_zone_info": false, 00:21:14.491 "zone_management": false, 00:21:14.491 "zone_append": false, 00:21:14.491 "compare": true, 00:21:14.491 "compare_and_write": true, 00:21:14.491 "abort": true, 00:21:14.491 "seek_hole": false, 00:21:14.491 "seek_data": false, 00:21:14.491 "copy": true, 00:21:14.491 "nvme_iov_md": false 00:21:14.491 }, 00:21:14.491 "memory_domains": [ 00:21:14.491 { 00:21:14.491 "dma_device_id": "system", 00:21:14.491 "dma_device_type": 1 00:21:14.491 } 00:21:14.491 ], 00:21:14.491 "driver_specific": { 00:21:14.491 "nvme": [ 00:21:14.491 { 00:21:14.491 "trid": { 00:21:14.491 "trtype": "TCP", 00:21:14.491 "adrfam": "IPv4", 00:21:14.491 "traddr": "10.0.0.2", 00:21:14.491 "trsvcid": "4420", 00:21:14.491 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.491 }, 00:21:14.491 "ctrlr_data": { 00:21:14.491 "cntlid": 2, 00:21:14.491 "vendor_id": "0x8086", 00:21:14.491 "model_number": "SPDK bdev Controller", 00:21:14.491 "serial_number": "00000000000000000000", 00:21:14.491 "firmware_revision": "25.01", 00:21:14.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.491 "oacs": { 00:21:14.491 "security": 0, 00:21:14.491 "format": 0, 00:21:14.491 "firmware": 0, 00:21:14.491 "ns_manage": 0 00:21:14.491 }, 00:21:14.491 "multi_ctrlr": true, 00:21:14.491 "ana_reporting": false 00:21:14.491 }, 00:21:14.491 "vs": { 00:21:14.491 "nvme_version": "1.3" 00:21:14.491 }, 00:21:14.491 "ns_data": { 00:21:14.491 "id": 1, 00:21:14.491 "can_share": true 00:21:14.491 } 00:21:14.491 } 00:21:14.491 ], 00:21:14.491 "mp_policy": "active_passive" 00:21:14.491 } 00:21:14.491 } 00:21:14.491 ] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
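Up to this point the plain-TCP phase of async_init.sh amounts to a short RPC sequence: create the TCP transport, back a null bdev, expose it through a subsystem with a fixed namespace GUID, open a listener on 10.0.0.2:4420, then attach and finally detach a host-side controller. As a hedged sketch (all names, addresses and the GUID copied from this run, issued through scripts/rpc.py against the default /var/tmp/spdk.sock socket):

    # hedged sketch -- values taken verbatim from the rpc_cmd calls in the trace above
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_null_create null0 1024 512       # 1024 MB backing size, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e3ceb5dc64fe430687038ba61d0640d6
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_nvme_detach_controller nvme0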
00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oz1hQMJFrW 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oz1hQMJFrW 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.oz1hQMJFrW 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 [2024-12-13 09:32:26.786996] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.491 [2024-12-13 09:32:26.787098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 [2024-12-13 09:32:26.807061] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.751 nvme0n1 00:21:14.751 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.751 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:14.751 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.751 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.751 [ 00:21:14.751 { 00:21:14.751 "name": "nvme0n1", 00:21:14.751 "aliases": [ 00:21:14.751 "e3ceb5dc-64fe-4306-8703-8ba61d0640d6" 00:21:14.751 ], 00:21:14.751 "product_name": "NVMe disk", 00:21:14.751 "block_size": 512, 00:21:14.751 "num_blocks": 2097152, 00:21:14.751 "uuid": "e3ceb5dc-64fe-4306-8703-8ba61d0640d6", 00:21:14.751 "numa_id": 1, 00:21:14.751 "assigned_rate_limits": { 00:21:14.751 "rw_ios_per_sec": 0, 00:21:14.751 "rw_mbytes_per_sec": 0, 00:21:14.751 "r_mbytes_per_sec": 0, 00:21:14.751 "w_mbytes_per_sec": 0 00:21:14.751 }, 00:21:14.751 "claimed": false, 00:21:14.751 "zoned": false, 00:21:14.751 "supported_io_types": { 00:21:14.751 "read": true, 00:21:14.751 "write": true, 00:21:14.751 "unmap": false, 00:21:14.751 "flush": true, 00:21:14.751 "reset": true, 00:21:14.751 "nvme_admin": true, 00:21:14.751 "nvme_io": true, 00:21:14.751 "nvme_io_md": false, 00:21:14.751 "write_zeroes": true, 00:21:14.751 "zcopy": false, 00:21:14.751 "get_zone_info": false, 00:21:14.751 "zone_management": false, 00:21:14.751 "zone_append": false, 00:21:14.751 "compare": true, 00:21:14.751 "compare_and_write": true, 00:21:14.751 "abort": true, 00:21:14.751 "seek_hole": false, 00:21:14.751 "seek_data": false, 00:21:14.751 "copy": true, 00:21:14.751 "nvme_iov_md": false 00:21:14.751 }, 00:21:14.751 "memory_domains": [ 00:21:14.751 { 00:21:14.751 "dma_device_id": "system", 00:21:14.751 "dma_device_type": 1 00:21:14.751 } 00:21:14.751 ], 00:21:14.751 "driver_specific": { 00:21:14.751 "nvme": [ 00:21:14.751 { 00:21:14.751 "trid": { 00:21:14.751 "trtype": "TCP", 00:21:14.751 "adrfam": "IPv4", 00:21:14.751 "traddr": "10.0.0.2", 00:21:14.751 "trsvcid": "4421", 00:21:14.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.751 }, 00:21:14.751 "ctrlr_data": { 00:21:14.751 "cntlid": 3, 00:21:14.751 "vendor_id": "0x8086", 00:21:14.751 "model_number": "SPDK bdev Controller", 00:21:14.751 "serial_number": "00000000000000000000", 00:21:14.751 "firmware_revision": "25.01", 00:21:14.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.751 "oacs": { 00:21:14.751 "security": 0, 00:21:14.751 "format": 0, 00:21:14.751 "firmware": 0, 00:21:14.751 "ns_manage": 0 00:21:14.751 }, 00:21:14.751 "multi_ctrlr": true, 00:21:14.751 "ana_reporting": false 00:21:14.751 }, 00:21:14.751 "vs": { 00:21:14.751 "nvme_version": "1.3" 00:21:14.751 }, 00:21:14.751 "ns_data": { 00:21:14.751 "id": 1, 00:21:14.751 "can_share": true 00:21:14.751 } 00:21:14.751 } 00:21:14.751 ], 00:21:14.752 "mp_policy": "active_passive" 00:21:14.752 } 00:21:14.752 } 00:21:14.752 ] 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.oz1hQMJFrW 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
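The second attach above goes through the TLS-protected listener on port 4421 (cntlid 3 in the last dump). The moving parts are a PSK registered via the keyring, a listener created with --secure-channel, and a host entry that binds the PSK to the host NQN. A hedged sketch of that phase, with the key, ports and NQNs copied from this run (the key is the sample key shown in the trace, not a secret; the redirect writing it into the temp file is not visible in the xtrace and is assumed here):

    # hedged sketch -- key material and NQNs copied from this run's trace; assumes the default RPC socket
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
    chmod 0600 "$KEY"                          # restrict permissions before registering the key
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    $RPC bdev_nvme_detach_controller nvme0
    rm -f "$KEY"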
00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.752 rmmod nvme_tcp 00:21:14.752 rmmod nvme_fabrics 00:21:14.752 rmmod nvme_keyring 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3399750 ']' 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3399750 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3399750 ']' 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3399750 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.752 09:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399750 00:21:14.752 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.752 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.752 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399750' 00:21:14.752 killing process with pid 3399750 00:21:14.752 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3399750 00:21:14.752 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3399750 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.011 09:32:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.914 00:21:16.914 real 0m8.775s 00:21:16.914 user 0m2.866s 00:21:16.914 sys 0m4.348s 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.914 ************************************ 00:21:16.914 END TEST nvmf_async_init 00:21:16.914 ************************************ 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.914 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.173 ************************************ 00:21:17.173 START TEST dma 00:21:17.173 ************************************ 00:21:17.173 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:17.173 * Looking for test storage... 00:21:17.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.173 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.173 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.173 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.174 --rc genhtml_branch_coverage=1 00:21:17.174 --rc genhtml_function_coverage=1 00:21:17.174 --rc genhtml_legend=1 00:21:17.174 --rc geninfo_all_blocks=1 00:21:17.174 --rc geninfo_unexecuted_blocks=1 00:21:17.174 00:21:17.174 ' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.174 --rc genhtml_branch_coverage=1 00:21:17.174 --rc genhtml_function_coverage=1 00:21:17.174 --rc genhtml_legend=1 00:21:17.174 --rc geninfo_all_blocks=1 00:21:17.174 --rc geninfo_unexecuted_blocks=1 00:21:17.174 00:21:17.174 ' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.174 --rc genhtml_branch_coverage=1 00:21:17.174 --rc genhtml_function_coverage=1 00:21:17.174 --rc genhtml_legend=1 00:21:17.174 --rc geninfo_all_blocks=1 00:21:17.174 --rc geninfo_unexecuted_blocks=1 00:21:17.174 00:21:17.174 ' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.174 --rc genhtml_branch_coverage=1 00:21:17.174 --rc genhtml_function_coverage=1 00:21:17.174 --rc genhtml_legend=1 00:21:17.174 --rc geninfo_all_blocks=1 00:21:17.174 --rc geninfo_unexecuted_blocks=1 00:21:17.174 00:21:17.174 ' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.174 
09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:17.174 09:32:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:17.174 00:21:17.174 real 0m0.205s 00:21:17.175 user 0m0.129s 00:21:17.175 sys 0m0.090s 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.175 ************************************ 00:21:17.175 END TEST dma 00:21:17.175 ************************************ 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.175 09:32:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.434 ************************************ 00:21:17.434 START TEST nvmf_identify 00:21:17.434 
************************************ 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:17.434 * Looking for test storage... 00:21:17.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:17.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.434 --rc genhtml_branch_coverage=1 00:21:17.434 --rc genhtml_function_coverage=1 00:21:17.434 --rc genhtml_legend=1 00:21:17.434 --rc geninfo_all_blocks=1 00:21:17.434 --rc geninfo_unexecuted_blocks=1 00:21:17.434 00:21:17.434 ' 00:21:17.434 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.435 --rc genhtml_branch_coverage=1 00:21:17.435 --rc genhtml_function_coverage=1 00:21:17.435 --rc genhtml_legend=1 00:21:17.435 --rc geninfo_all_blocks=1 00:21:17.435 --rc geninfo_unexecuted_blocks=1 00:21:17.435 00:21:17.435 ' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.435 09:32:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.009 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:24.010 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:24.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:24.010 Found net devices under 0000:af:00.0: cvl_0_0 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:24.010 Found net devices under 0000:af:00.1: cvl_0_1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:21:24.010 00:21:24.010 --- 10.0.0.2 ping statistics --- 00:21:24.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.010 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:24.010 00:21:24.010 --- 10.0.0.1 ping statistics --- 00:21:24.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.010 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3403513 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3403513 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3403513 ']' 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.010 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.010 [2024-12-13 09:32:35.618009] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
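At this point nvmf_tcp_init has split the two E810 ports between a fresh network namespace (target side, 10.0.0.2 on cvl_0_0) and the root namespace (initiator side, 10.0.0.1 on cvl_0_1), verified connectivity with ping in both directions, and launched nvmf_tgt inside the namespace. A condensed sketch of that same sequence, using the values from this run (the nvmf_tgt path is the build-tree binary and is assumed to be run from the SPDK repo root):

# Target side: move cvl_0_0 into its own namespace and address it.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: keep cvl_0_1 in the root namespace and open TCP/4420.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &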
00:21:24.011 [2024-12-13 09:32:35.618052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.011 [2024-12-13 09:32:35.684412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.011 [2024-12-13 09:32:35.728029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.011 [2024-12-13 09:32:35.728065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.011 [2024-12-13 09:32:35.728074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.011 [2024-12-13 09:32:35.728083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.011 [2024-12-13 09:32:35.728088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.011 [2024-12-13 09:32:35.729558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.011 [2024-12-13 09:32:35.729655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.011 [2024-12-13 09:32:35.729765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.011 [2024-12-13 09:32:35.729766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 [2024-12-13 09:32:35.839493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 Malloc0 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 [2024-12-13 09:32:35.943758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.011 [ 00:21:24.011 { 00:21:24.011 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:24.011 "subtype": "Discovery", 00:21:24.011 "listen_addresses": [ 00:21:24.011 { 00:21:24.011 "trtype": "TCP", 00:21:24.011 "adrfam": "IPv4", 00:21:24.011 "traddr": "10.0.0.2", 00:21:24.011 "trsvcid": "4420" 00:21:24.011 } 00:21:24.011 ], 00:21:24.011 "allow_any_host": true, 00:21:24.011 "hosts": [] 00:21:24.011 }, 00:21:24.011 { 00:21:24.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.011 "subtype": "NVMe", 00:21:24.011 "listen_addresses": [ 00:21:24.011 { 00:21:24.011 "trtype": "TCP", 00:21:24.011 "adrfam": "IPv4", 00:21:24.011 "traddr": "10.0.0.2", 00:21:24.011 "trsvcid": "4420" 00:21:24.011 } 00:21:24.011 ], 00:21:24.011 "allow_any_host": true, 00:21:24.011 "hosts": [], 00:21:24.011 "serial_number": "SPDK00000000000001", 00:21:24.011 "model_number": "SPDK bdev Controller", 00:21:24.011 "max_namespaces": 32, 00:21:24.011 "min_cntlid": 1, 00:21:24.011 "max_cntlid": 65519, 00:21:24.011 "namespaces": [ 00:21:24.011 { 00:21:24.011 "nsid": 1, 00:21:24.011 "bdev_name": "Malloc0", 00:21:24.011 "name": "Malloc0", 00:21:24.011 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:24.011 "eui64": "ABCDEF0123456789", 00:21:24.011 "uuid": "4daaa46f-8ce8-44b7-95a0-b15567948903" 00:21:24.011 } 00:21:24.011 ] 00:21:24.011 } 00:21:24.011 ] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.011 09:32:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:24.011 [2024-12-13 09:32:35.995240] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:21:24.011 [2024-12-13 09:32:35.995280] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403655 ] 00:21:24.011 [2024-12-13 09:32:36.035970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:24.011 [2024-12-13 09:32:36.036011] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.011 [2024-12-13 09:32:36.036016] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.011 [2024-12-13 09:32:36.036027] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.011 [2024-12-13 09:32:36.036036] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.011 [2024-12-13 09:32:36.039679] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:24.011 [2024-12-13 09:32:36.039716] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x164f690 0 00:21:24.011 [2024-12-13 09:32:36.047458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.011 [2024-12-13 09:32:36.047472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.011 [2024-12-13 09:32:36.047476] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.011 [2024-12-13 09:32:36.047479] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.011 [2024-12-13 09:32:36.047513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.047518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.047522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.011 [2024-12-13 09:32:36.047533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.011 [2024-12-13 09:32:36.047550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.011 [2024-12-13 09:32:36.055458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.011 [2024-12-13 09:32:36.055467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.011 [2024-12-13 09:32:36.055470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.011 [2024-12-13 09:32:36.055485] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.011 [2024-12-13 09:32:36.055491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:24.011 [2024-12-13 09:32:36.055499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:24.011 [2024-12-13 09:32:36.055511] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.011 [2024-12-13 09:32:36.055524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.011 [2024-12-13 09:32:36.055537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.011 [2024-12-13 09:32:36.055658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.011 [2024-12-13 09:32:36.055664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.011 [2024-12-13 09:32:36.055667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.011 [2024-12-13 09:32:36.055675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:24.011 [2024-12-13 09:32:36.055681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:24.011 [2024-12-13 09:32:36.055687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.011 [2024-12-13 09:32:36.055694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.011 [2024-12-13 09:32:36.055699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.011 [2024-12-13 09:32:36.055709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.011 [2024-12-13 09:32:36.055805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.011 [2024-12-13 09:32:36.055810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.055813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.055820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:24.012 [2024-12-13 09:32:36.055828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.055834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.055845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.012 [2024-12-13 09:32:36.055855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 
00:21:24.012 [2024-12-13 09:32:36.055915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.055921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.055924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.055931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.055942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.055948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.055954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.012 [2024-12-13 09:32:36.055963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.012 [2024-12-13 09:32:36.056056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.056062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.056065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.056072] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.012 [2024-12-13 09:32:36.056077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.056083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.056191] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:24.012 [2024-12-13 09:32:36.056195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.056202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.012 [2024-12-13 09:32:36.056224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.012 [2024-12-13 09:32:36.056291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.056296] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.056299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.056307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.012 [2024-12-13 09:32:36.056314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.012 [2024-12-13 09:32:36.056335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.012 [2024-12-13 09:32:36.056440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.056446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.056453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.056461] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.012 [2024-12-13 09:32:36.056467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.012 [2024-12-13 09:32:36.056473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:24.012 [2024-12-13 09:32:36.056484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.012 [2024-12-13 09:32:36.056494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.012 [2024-12-13 09:32:36.056513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.012 [2024-12-13 09:32:36.056599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.012 [2024-12-13 09:32:36.056604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.012 [2024-12-13 09:32:36.056607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164f690): datao=0, datal=4096, cccid=0 00:21:24.012 [2024-12-13 09:32:36.056615] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x16b1100) on tqpair(0x164f690): expected_datao=0, payload_size=4096 00:21:24.012 [2024-12-13 09:32:36.056619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056646] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.056697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.056700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.056710] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:24.012 [2024-12-13 09:32:36.056715] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:24.012 [2024-12-13 09:32:36.056719] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:24.012 [2024-12-13 09:32:36.056723] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:24.012 [2024-12-13 09:32:36.056727] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:24.012 [2024-12-13 09:32:36.056731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:24.012 [2024-12-13 09:32:36.056738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.012 [2024-12-13 09:32:36.056744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.012 [2024-12-13 09:32:36.056770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.012 [2024-12-13 09:32:36.056842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.012 [2024-12-13 09:32:36.056848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.012 [2024-12-13 09:32:36.056851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.012 [2024-12-13 09:32:36.056861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x164f690) 00:21:24.012 
[2024-12-13 09:32:36.056872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.012 [2024-12-13 09:32:36.056877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.012 [2024-12-13 09:32:36.056893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.012 [2024-12-13 09:32:36.056909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.012 [2024-12-13 09:32:36.056916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.012 [2024-12-13 09:32:36.056920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.013 [2024-12-13 09:32:36.056925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.013 [2024-12-13 09:32:36.056935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.013 [2024-12-13 09:32:36.056940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.056944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.056949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.013 [2024-12-13 09:32:36.056959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1100, cid 0, qid 0 00:21:24.013 [2024-12-13 09:32:36.056964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1280, cid 1, qid 0 00:21:24.013 [2024-12-13 09:32:36.056968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1400, cid 2, qid 0 00:21:24.013 [2024-12-13 09:32:36.056972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.013 [2024-12-13 09:32:36.056976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1700, cid 4, qid 0 00:21:24.013 [2024-12-13 09:32:36.057102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.057108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.057111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:24.013 [2024-12-13 09:32:36.057114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1700) on tqpair=0x164f690 00:21:24.013 [2024-12-13 09:32:36.057120] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:24.013 [2024-12-13 09:32:36.057125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:24.013 [2024-12-13 09:32:36.057134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.057143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.013 [2024-12-13 09:32:36.057153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1700, cid 4, qid 0 00:21:24.013 [2024-12-13 09:32:36.057229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.013 [2024-12-13 09:32:36.057235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.013 [2024-12-13 09:32:36.057238] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057241] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164f690): datao=0, datal=4096, cccid=4 00:21:24.013 [2024-12-13 09:32:36.057245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b1700) on tqpair(0x164f690): expected_datao=0, payload_size=4096 00:21:24.013 [2024-12-13 09:32:36.057249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057257] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.057303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.057306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1700) on tqpair=0x164f690 00:21:24.013 [2024-12-13 09:32:36.057320] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:24.013 [2024-12-13 09:32:36.057341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.057351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.013 [2024-12-13 09:32:36.057356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.057368] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.013 [2024-12-13 09:32:36.057381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1700, cid 4, qid 0 00:21:24.013 [2024-12-13 09:32:36.057385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1880, cid 5, qid 0 00:21:24.013 [2024-12-13 09:32:36.057504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.013 [2024-12-13 09:32:36.057511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.013 [2024-12-13 09:32:36.057514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057517] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164f690): datao=0, datal=1024, cccid=4 00:21:24.013 [2024-12-13 09:32:36.057521] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b1700) on tqpair(0x164f690): expected_datao=0, payload_size=1024 00:21:24.013 [2024-12-13 09:32:36.057527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057536] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.057545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.057548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.057551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1880) on tqpair=0x164f690 00:21:24.013 [2024-12-13 09:32:36.103457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.103467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.103470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1700) on tqpair=0x164f690 00:21:24.013 [2024-12-13 09:32:36.103484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.103495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.013 [2024-12-13 09:32:36.103512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1700, cid 4, qid 0 00:21:24.013 [2024-12-13 09:32:36.103598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.013 [2024-12-13 09:32:36.103604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.013 [2024-12-13 09:32:36.103607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103610] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164f690): datao=0, datal=3072, cccid=4 00:21:24.013 [2024-12-13 09:32:36.103614] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b1700) on tqpair(0x164f690): expected_datao=0, payload_size=3072 00:21:24.013 [2024-12-13 09:32:36.103618] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103628] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103632] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.103719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.103722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1700) on tqpair=0x164f690 00:21:24.013 [2024-12-13 09:32:36.103733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x164f690) 00:21:24.013 [2024-12-13 09:32:36.103742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.013 [2024-12-13 09:32:36.103755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1700, cid 4, qid 0 00:21:24.013 [2024-12-13 09:32:36.103829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.013 [2024-12-13 09:32:36.103834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.013 [2024-12-13 09:32:36.103837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x164f690): datao=0, datal=8, cccid=4 00:21:24.013 [2024-12-13 09:32:36.103844] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16b1700) on tqpair(0x164f690): expected_datao=0, payload_size=8 00:21:24.013 [2024-12-13 09:32:36.103847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103855] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.103859] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.145581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.013 [2024-12-13 09:32:36.145592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.013 [2024-12-13 09:32:36.145595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.013 [2024-12-13 09:32:36.145598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1700) on tqpair=0x164f690 00:21:24.013 ===================================================== 00:21:24.013 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:24.013 ===================================================== 00:21:24.013 Controller Capabilities/Features 00:21:24.013 ================================ 00:21:24.013 Vendor ID: 0000 00:21:24.013 Subsystem Vendor ID: 0000 00:21:24.013 Serial Number: .................... 00:21:24.013 Model Number: ........................................ 
00:21:24.013 Firmware Version: 25.01 00:21:24.013 Recommended Arb Burst: 0 00:21:24.013 IEEE OUI Identifier: 00 00 00 00:21:24.013 Multi-path I/O 00:21:24.013 May have multiple subsystem ports: No 00:21:24.013 May have multiple controllers: No 00:21:24.013 Associated with SR-IOV VF: No 00:21:24.013 Max Data Transfer Size: 131072 00:21:24.013 Max Number of Namespaces: 0 00:21:24.013 Max Number of I/O Queues: 1024 00:21:24.013 NVMe Specification Version (VS): 1.3 00:21:24.013 NVMe Specification Version (Identify): 1.3 00:21:24.013 Maximum Queue Entries: 128 00:21:24.013 Contiguous Queues Required: Yes 00:21:24.013 Arbitration Mechanisms Supported 00:21:24.013 Weighted Round Robin: Not Supported 00:21:24.013 Vendor Specific: Not Supported 00:21:24.013 Reset Timeout: 15000 ms 00:21:24.013 Doorbell Stride: 4 bytes 00:21:24.013 NVM Subsystem Reset: Not Supported 00:21:24.013 Command Sets Supported 00:21:24.013 NVM Command Set: Supported 00:21:24.014 Boot Partition: Not Supported 00:21:24.014 Memory Page Size Minimum: 4096 bytes 00:21:24.014 Memory Page Size Maximum: 4096 bytes 00:21:24.014 Persistent Memory Region: Not Supported 00:21:24.014 Optional Asynchronous Events Supported 00:21:24.014 Namespace Attribute Notices: Not Supported 00:21:24.014 Firmware Activation Notices: Not Supported 00:21:24.014 ANA Change Notices: Not Supported 00:21:24.014 PLE Aggregate Log Change Notices: Not Supported 00:21:24.014 LBA Status Info Alert Notices: Not Supported 00:21:24.014 EGE Aggregate Log Change Notices: Not Supported 00:21:24.014 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.014 Zone Descriptor Change Notices: Not Supported 00:21:24.014 Discovery Log Change Notices: Supported 00:21:24.014 Controller Attributes 00:21:24.014 128-bit Host Identifier: Not Supported 00:21:24.014 Non-Operational Permissive Mode: Not Supported 00:21:24.014 NVM Sets: Not Supported 00:21:24.014 Read Recovery Levels: Not Supported 00:21:24.014 Endurance Groups: Not Supported 00:21:24.014 Predictable Latency Mode: Not Supported 00:21:24.014 Traffic Based Keep ALive: Not Supported 00:21:24.014 Namespace Granularity: Not Supported 00:21:24.014 SQ Associations: Not Supported 00:21:24.014 UUID List: Not Supported 00:21:24.014 Multi-Domain Subsystem: Not Supported 00:21:24.014 Fixed Capacity Management: Not Supported 00:21:24.014 Variable Capacity Management: Not Supported 00:21:24.014 Delete Endurance Group: Not Supported 00:21:24.014 Delete NVM Set: Not Supported 00:21:24.014 Extended LBA Formats Supported: Not Supported 00:21:24.014 Flexible Data Placement Supported: Not Supported 00:21:24.014 00:21:24.014 Controller Memory Buffer Support 00:21:24.014 ================================ 00:21:24.014 Supported: No 00:21:24.014 00:21:24.014 Persistent Memory Region Support 00:21:24.014 ================================ 00:21:24.014 Supported: No 00:21:24.014 00:21:24.014 Admin Command Set Attributes 00:21:24.014 ============================ 00:21:24.014 Security Send/Receive: Not Supported 00:21:24.014 Format NVM: Not Supported 00:21:24.014 Firmware Activate/Download: Not Supported 00:21:24.014 Namespace Management: Not Supported 00:21:24.014 Device Self-Test: Not Supported 00:21:24.014 Directives: Not Supported 00:21:24.014 NVMe-MI: Not Supported 00:21:24.014 Virtualization Management: Not Supported 00:21:24.014 Doorbell Buffer Config: Not Supported 00:21:24.014 Get LBA Status Capability: Not Supported 00:21:24.014 Command & Feature Lockdown Capability: Not Supported 00:21:24.014 Abort Command Limit: 1 00:21:24.014 Async 
Event Request Limit: 4 00:21:24.014 Number of Firmware Slots: N/A 00:21:24.014 Firmware Slot 1 Read-Only: N/A 00:21:24.014 Firmware Activation Without Reset: N/A 00:21:24.014 Multiple Update Detection Support: N/A 00:21:24.014 Firmware Update Granularity: No Information Provided 00:21:24.014 Per-Namespace SMART Log: No 00:21:24.014 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.014 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:24.014 Command Effects Log Page: Not Supported 00:21:24.014 Get Log Page Extended Data: Supported 00:21:24.014 Telemetry Log Pages: Not Supported 00:21:24.014 Persistent Event Log Pages: Not Supported 00:21:24.014 Supported Log Pages Log Page: May Support 00:21:24.014 Commands Supported & Effects Log Page: Not Supported 00:21:24.014 Feature Identifiers & Effects Log Page:May Support 00:21:24.014 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.014 Data Area 4 for Telemetry Log: Not Supported 00:21:24.014 Error Log Page Entries Supported: 128 00:21:24.014 Keep Alive: Not Supported 00:21:24.014 00:21:24.014 NVM Command Set Attributes 00:21:24.014 ========================== 00:21:24.014 Submission Queue Entry Size 00:21:24.014 Max: 1 00:21:24.014 Min: 1 00:21:24.014 Completion Queue Entry Size 00:21:24.014 Max: 1 00:21:24.014 Min: 1 00:21:24.014 Number of Namespaces: 0 00:21:24.014 Compare Command: Not Supported 00:21:24.014 Write Uncorrectable Command: Not Supported 00:21:24.014 Dataset Management Command: Not Supported 00:21:24.014 Write Zeroes Command: Not Supported 00:21:24.014 Set Features Save Field: Not Supported 00:21:24.014 Reservations: Not Supported 00:21:24.014 Timestamp: Not Supported 00:21:24.014 Copy: Not Supported 00:21:24.014 Volatile Write Cache: Not Present 00:21:24.014 Atomic Write Unit (Normal): 1 00:21:24.014 Atomic Write Unit (PFail): 1 00:21:24.014 Atomic Compare & Write Unit: 1 00:21:24.014 Fused Compare & Write: Supported 00:21:24.014 Scatter-Gather List 00:21:24.014 SGL Command Set: Supported 00:21:24.014 SGL Keyed: Supported 00:21:24.014 SGL Bit Bucket Descriptor: Not Supported 00:21:24.014 SGL Metadata Pointer: Not Supported 00:21:24.014 Oversized SGL: Not Supported 00:21:24.014 SGL Metadata Address: Not Supported 00:21:24.014 SGL Offset: Supported 00:21:24.014 Transport SGL Data Block: Not Supported 00:21:24.014 Replay Protected Memory Block: Not Supported 00:21:24.014 00:21:24.014 Firmware Slot Information 00:21:24.014 ========================= 00:21:24.014 Active slot: 0 00:21:24.014 00:21:24.014 00:21:24.014 Error Log 00:21:24.014 ========= 00:21:24.014 00:21:24.014 Active Namespaces 00:21:24.014 ================= 00:21:24.014 Discovery Log Page 00:21:24.014 ================== 00:21:24.014 Generation Counter: 2 00:21:24.014 Number of Records: 2 00:21:24.014 Record Format: 0 00:21:24.014 00:21:24.014 Discovery Log Entry 0 00:21:24.014 ---------------------- 00:21:24.014 Transport Type: 3 (TCP) 00:21:24.014 Address Family: 1 (IPv4) 00:21:24.014 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:24.014 Entry Flags: 00:21:24.014 Duplicate Returned Information: 1 00:21:24.014 Explicit Persistent Connection Support for Discovery: 1 00:21:24.014 Transport Requirements: 00:21:24.014 Secure Channel: Not Required 00:21:24.014 Port ID: 0 (0x0000) 00:21:24.014 Controller ID: 65535 (0xffff) 00:21:24.014 Admin Max SQ Size: 128 00:21:24.014 Transport Service Identifier: 4420 00:21:24.014 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:24.014 Transport Address: 10.0.0.2 00:21:24.014 
Discovery Log Entry 1 00:21:24.014 ---------------------- 00:21:24.014 Transport Type: 3 (TCP) 00:21:24.014 Address Family: 1 (IPv4) 00:21:24.014 Subsystem Type: 2 (NVM Subsystem) 00:21:24.014 Entry Flags: 00:21:24.014 Duplicate Returned Information: 0 00:21:24.014 Explicit Persistent Connection Support for Discovery: 0 00:21:24.014 Transport Requirements: 00:21:24.014 Secure Channel: Not Required 00:21:24.014 Port ID: 0 (0x0000) 00:21:24.014 Controller ID: 65535 (0xffff) 00:21:24.014 Admin Max SQ Size: 128 00:21:24.014 Transport Service Identifier: 4420 00:21:24.014 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:24.014 Transport Address: 10.0.0.2 [2024-12-13 09:32:36.145678] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:24.014 [2024-12-13 09:32:36.145689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1100) on tqpair=0x164f690 00:21:24.014 [2024-12-13 09:32:36.145696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.014 [2024-12-13 09:32:36.145700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1280) on tqpair=0x164f690 00:21:24.014 [2024-12-13 09:32:36.145704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.014 [2024-12-13 09:32:36.145709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1400) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.145712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.015 [2024-12-13 09:32:36.145716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.145720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.015 [2024-12-13 09:32:36.145728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.145742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.145755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.145824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.145830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.145833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.145842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 
09:32:36.145854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.145865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.145973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.145978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.145981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.145985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.145989] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:24.015 [2024-12-13 09:32:36.145993] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:24.015 [2024-12-13 09:32:36.146002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146250] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.146930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.146936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.146938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.146949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.146956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.146961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.146970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 [2024-12-13 09:32:36.147032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.147037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.147040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.147051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.147077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.147087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.015 
[2024-12-13 09:32:36.147145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.015 [2024-12-13 09:32:36.147150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.015 [2024-12-13 09:32:36.147153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.015 [2024-12-13 09:32:36.147164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.015 [2024-12-13 09:32:36.147171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.015 [2024-12-13 09:32:36.147176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.015 [2024-12-13 09:32:36.147185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.016 [2024-12-13 09:32:36.147297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.147302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.147305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.147308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.016 [2024-12-13 09:32:36.147317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.147320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.147323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.016 [2024-12-13 09:32:36.147329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.147338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.016 [2024-12-13 09:32:36.147435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.147440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.147443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.147447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.016 [2024-12-13 09:32:36.151460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.151465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.151468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x164f690) 00:21:24.016 [2024-12-13 09:32:36.151474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.151485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16b1580, cid 3, qid 0 00:21:24.016 [2024-12-13 09:32:36.151620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.151626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:24.016 [2024-12-13 09:32:36.151629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.151632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16b1580) on tqpair=0x164f690 00:21:24.016 [2024-12-13 09:32:36.151639] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:24.016 00:21:24.016 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:24.016 [2024-12-13 09:32:36.190807] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:21:24.016 [2024-12-13 09:32:36.190855] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403745 ] 00:21:24.016 [2024-12-13 09:32:36.229742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:24.016 [2024-12-13 09:32:36.229778] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.016 [2024-12-13 09:32:36.229782] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.016 [2024-12-13 09:32:36.229793] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.016 [2024-12-13 09:32:36.229801] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.016 [2024-12-13 09:32:36.233643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:24.016 [2024-12-13 09:32:36.233673] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15b7690 0 00:21:24.016 [2024-12-13 09:32:36.241457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.016 [2024-12-13 09:32:36.241470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.016 [2024-12-13 09:32:36.241474] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.016 [2024-12-13 09:32:36.241478] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.016 [2024-12-13 09:32:36.241504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.241509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.241512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.241521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.016 [2024-12-13 09:32:36.241538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.248456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.248463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.248467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248471] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.016 [2024-12-13 09:32:36.248482] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.016 [2024-12-13 09:32:36.248488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:24.016 [2024-12-13 09:32:36.248493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:24.016 [2024-12-13 09:32:36.248502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.248516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.248529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.248683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.248689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.248692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.016 [2024-12-13 09:32:36.248699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:24.016 [2024-12-13 09:32:36.248706] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:24.016 [2024-12-13 09:32:36.248712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.248724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.248735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.248798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.248804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.248807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.016 [2024-12-13 09:32:36.248815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:24.016 [2024-12-13 09:32:36.248821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.016 [2024-12-13 09:32:36.248827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.248839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.248849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.248912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.248918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.248921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.016 [2024-12-13 09:32:36.248928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.016 [2024-12-13 09:32:36.248936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.248943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.248949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.248958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.249022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.016 [2024-12-13 09:32:36.249028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.016 [2024-12-13 09:32:36.249031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.249036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.016 [2024-12-13 09:32:36.249040] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.016 [2024-12-13 09:32:36.249044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.016 [2024-12-13 09:32:36.249051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.016 [2024-12-13 09:32:36.249158] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:24.016 [2024-12-13 09:32:36.249162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.016 [2024-12-13 09:32:36.249169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.249172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.016 [2024-12-13 09:32:36.249175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x15b7690) 00:21:24.016 [2024-12-13 09:32:36.249181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.016 [2024-12-13 09:32:36.249191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.016 [2024-12-13 09:32:36.249266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.249271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.249274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.017 [2024-12-13 09:32:36.249281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.017 [2024-12-13 09:32:36.249289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.249302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.017 [2024-12-13 09:32:36.249312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.017 [2024-12-13 09:32:36.249375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.249380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.249383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.017 [2024-12-13 09:32:36.249391] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.017 [2024-12-13 09:32:36.249395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.249401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:24.017 [2024-12-13 09:32:36.249411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.249420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.249429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.017 [2024-12-13 09:32:36.249442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.017 [2024-12-13 09:32:36.249546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.017 [2024-12-13 
09:32:36.249552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.017 [2024-12-13 09:32:36.249556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=4096, cccid=0 00:21:24.017 [2024-12-13 09:32:36.249563] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619100) on tqpair(0x15b7690): expected_datao=0, payload_size=4096 00:21:24.017 [2024-12-13 09:32:36.249566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249581] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.249585] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.292467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.292470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.017 [2024-12-13 09:32:36.292481] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:24.017 [2024-12-13 09:32:36.292485] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:24.017 [2024-12-13 09:32:36.292489] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:24.017 [2024-12-13 09:32:36.292493] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:24.017 [2024-12-13 09:32:36.292497] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:24.017 [2024-12-13 09:32:36.292501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.017 [2024-12-13 09:32:36.292541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.017 [2024-12-13 09:32:36.292623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.292629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.292632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 
00:21:24.017 [2024-12-13 09:32:36.292641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.017 [2024-12-13 09:32:36.292657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.017 [2024-12-13 09:32:36.292676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.017 [2024-12-13 09:32:36.292692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.017 [2024-12-13 09:32:36.292707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.017 [2024-12-13 09:32:36.292745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619100, cid 0, qid 0 00:21:24.017 [2024-12-13 09:32:36.292750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619280, cid 1, qid 0 00:21:24.017 [2024-12-13 09:32:36.292754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619400, cid 2, qid 0 00:21:24.017 [2024-12-13 09:32:36.292758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.017 [2024-12-13 09:32:36.292762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1619700, cid 4, qid 0 00:21:24.017 [2024-12-13 09:32:36.292861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.292867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.292869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.017 [2024-12-13 09:32:36.292877] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:24.017 [2024-12-13 09:32:36.292881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.292901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.292907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.292914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.017 [2024-12-13 09:32:36.292924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.017 [2024-12-13 09:32:36.292992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.017 [2024-12-13 09:32:36.292997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.017 [2024-12-13 09:32:36.293000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.293003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.017 [2024-12-13 09:32:36.293053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.293063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:24.017 [2024-12-13 09:32:36.293069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.017 [2024-12-13 09:32:36.293072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.017 [2024-12-13 09:32:36.293077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.017 [2024-12-13 09:32:36.293088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.017 [2024-12-13 09:32:36.293165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.017 [2024-12-13 09:32:36.293171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.017 [2024-12-13 
09:32:36.293174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.293177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=4096, cccid=4 00:21:24.018 [2024-12-13 09:32:36.293181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619700) on tqpair(0x15b7690): expected_datao=0, payload_size=4096 00:21:24.018 [2024-12-13 09:32:36.293185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.293198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.293202] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.018 [2024-12-13 09:32:36.334600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.018 [2024-12-13 09:32:36.334604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.018 [2024-12-13 09:32:36.334619] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:24.018 [2024-12-13 09:32:36.334627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:24.018 [2024-12-13 09:32:36.334636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:24.018 [2024-12-13 09:32:36.334642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.018 [2024-12-13 09:32:36.334652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.018 [2024-12-13 09:32:36.334663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.018 [2024-12-13 09:32:36.334754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.018 [2024-12-13 09:32:36.334760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.018 [2024-12-13 09:32:36.334764] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=4096, cccid=4 00:21:24.018 [2024-12-13 09:32:36.334773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619700) on tqpair(0x15b7690): expected_datao=0, payload_size=4096 00:21:24.018 [2024-12-13 09:32:36.334776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334786] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.018 [2024-12-13 09:32:36.334817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.018 [2024-12-13 09:32:36.334820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:24.018 [2024-12-13 09:32:36.334823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.018 [2024-12-13 09:32:36.334832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:24.018 [2024-12-13 09:32:36.334841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:24.018 [2024-12-13 09:32:36.334847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.018 [2024-12-13 09:32:36.334856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.018 [2024-12-13 09:32:36.334866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.018 [2024-12-13 09:32:36.334944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.018 [2024-12-13 09:32:36.334950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.018 [2024-12-13 09:32:36.334953] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334956] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=4096, cccid=4 00:21:24.018 [2024-12-13 09:32:36.334960] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619700) on tqpair(0x15b7690): expected_datao=0, payload_size=4096 00:21:24.018 [2024-12-13 09:32:36.334964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334976] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.018 [2024-12-13 09:32:36.334980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.279 [2024-12-13 09:32:36.380470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.279 [2024-12-13 09:32:36.380473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.279 [2024-12-13 09:32:36.380489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380518] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380525] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:24.279 [2024-12-13 09:32:36.380529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:24.279 [2024-12-13 09:32:36.380533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:24.279 [2024-12-13 09:32:36.380547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.279 [2024-12-13 09:32:36.380557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.279 [2024-12-13 09:32:36.380563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b7690) 00:21:24.279 [2024-12-13 09:32:36.380575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.279 [2024-12-13 09:32:36.380589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.279 [2024-12-13 09:32:36.380594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619880, cid 5, qid 0 00:21:24.279 [2024-12-13 09:32:36.380678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.279 [2024-12-13 09:32:36.380684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.279 [2024-12-13 09:32:36.380687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.279 [2024-12-13 09:32:36.380695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.279 [2024-12-13 09:32:36.380700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.279 [2024-12-13 09:32:36.380703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619880) on tqpair=0x15b7690 00:21:24.279 [2024-12-13 09:32:36.380714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.279 [2024-12-13 09:32:36.380718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b7690) 00:21:24.279 [2024-12-13 09:32:36.380723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.279 [2024-12-13 09:32:36.380733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619880, cid 5, qid 0 00:21:24.279 [2024-12-13 09:32:36.380798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.279 [2024-12-13 09:32:36.380803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:21:24.280 [2024-12-13 09:32:36.380806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.380810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619880) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.380817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.380820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.380826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.380835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619880, cid 5, qid 0 00:21:24.280 [2024-12-13 09:32:36.380898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.380903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.380908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.380912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619880) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.380919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.380923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.380928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.380938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619880, cid 5, qid 0 00:21:24.280 [2024-12-13 09:32:36.380994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.381000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.381003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619880) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.381019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.381029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.381035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.381043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.381049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.381058] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.381064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15b7690) 00:21:24.280 [2024-12-13 09:32:36.381073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.280 [2024-12-13 09:32:36.381084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619880, cid 5, qid 0 00:21:24.280 [2024-12-13 09:32:36.381089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619700, cid 4, qid 0 00:21:24.280 [2024-12-13 09:32:36.381093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619a00, cid 6, qid 0 00:21:24.280 [2024-12-13 09:32:36.381097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619b80, cid 7, qid 0 00:21:24.280 [2024-12-13 09:32:36.381253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.280 [2024-12-13 09:32:36.381259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.280 [2024-12-13 09:32:36.381262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381265] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=8192, cccid=5 00:21:24.280 [2024-12-13 09:32:36.381269] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619880) on tqpair(0x15b7690): expected_datao=0, payload_size=8192 00:21:24.280 [2024-12-13 09:32:36.381273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.280 [2024-12-13 09:32:36.381303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.280 [2024-12-13 09:32:36.381306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381309] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=512, cccid=4 00:21:24.280 [2024-12-13 09:32:36.381313] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619700) on tqpair(0x15b7690): expected_datao=0, payload_size=512 00:21:24.280 [2024-12-13 09:32:36.381317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381322] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381326] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.280 [2024-12-13 09:32:36.381335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.280 [2024-12-13 09:32:36.381338] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381341] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=512, 
cccid=6 00:21:24.280 [2024-12-13 09:32:36.381345] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619a00) on tqpair(0x15b7690): expected_datao=0, payload_size=512 00:21:24.280 [2024-12-13 09:32:36.381349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381357] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.280 [2024-12-13 09:32:36.381367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.280 [2024-12-13 09:32:36.381370] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381373] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b7690): datao=0, datal=4096, cccid=7 00:21:24.280 [2024-12-13 09:32:36.381377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1619b80) on tqpair(0x15b7690): expected_datao=0, payload_size=4096 00:21:24.280 [2024-12-13 09:32:36.381381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381387] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.381402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.381405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619880) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.381421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.381426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.381430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619700) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.381441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.381446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.381455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619a00) on tqpair=0x15b7690 00:21:24.280 [2024-12-13 09:32:36.381465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.280 [2024-12-13 09:32:36.381470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.280 [2024-12-13 09:32:36.381474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.280 [2024-12-13 09:32:36.381478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619b80) on tqpair=0x15b7690 00:21:24.280 ===================================================== 00:21:24.280 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.280 ===================================================== 00:21:24.280 Controller 
Capabilities/Features 00:21:24.280 ================================ 00:21:24.280 Vendor ID: 8086 00:21:24.280 Subsystem Vendor ID: 8086 00:21:24.280 Serial Number: SPDK00000000000001 00:21:24.280 Model Number: SPDK bdev Controller 00:21:24.280 Firmware Version: 25.01 00:21:24.280 Recommended Arb Burst: 6 00:21:24.280 IEEE OUI Identifier: e4 d2 5c 00:21:24.280 Multi-path I/O 00:21:24.280 May have multiple subsystem ports: Yes 00:21:24.280 May have multiple controllers: Yes 00:21:24.280 Associated with SR-IOV VF: No 00:21:24.280 Max Data Transfer Size: 131072 00:21:24.280 Max Number of Namespaces: 32 00:21:24.280 Max Number of I/O Queues: 127 00:21:24.280 NVMe Specification Version (VS): 1.3 00:21:24.280 NVMe Specification Version (Identify): 1.3 00:21:24.280 Maximum Queue Entries: 128 00:21:24.280 Contiguous Queues Required: Yes 00:21:24.280 Arbitration Mechanisms Supported 00:21:24.280 Weighted Round Robin: Not Supported 00:21:24.280 Vendor Specific: Not Supported 00:21:24.280 Reset Timeout: 15000 ms 00:21:24.280 Doorbell Stride: 4 bytes 00:21:24.280 NVM Subsystem Reset: Not Supported 00:21:24.280 Command Sets Supported 00:21:24.280 NVM Command Set: Supported 00:21:24.280 Boot Partition: Not Supported 00:21:24.280 Memory Page Size Minimum: 4096 bytes 00:21:24.280 Memory Page Size Maximum: 4096 bytes 00:21:24.280 Persistent Memory Region: Not Supported 00:21:24.280 Optional Asynchronous Events Supported 00:21:24.280 Namespace Attribute Notices: Supported 00:21:24.280 Firmware Activation Notices: Not Supported 00:21:24.280 ANA Change Notices: Not Supported 00:21:24.280 PLE Aggregate Log Change Notices: Not Supported 00:21:24.280 LBA Status Info Alert Notices: Not Supported 00:21:24.280 EGE Aggregate Log Change Notices: Not Supported 00:21:24.280 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.280 Zone Descriptor Change Notices: Not Supported 00:21:24.280 Discovery Log Change Notices: Not Supported 00:21:24.280 Controller Attributes 00:21:24.280 128-bit Host Identifier: Supported 00:21:24.281 Non-Operational Permissive Mode: Not Supported 00:21:24.281 NVM Sets: Not Supported 00:21:24.281 Read Recovery Levels: Not Supported 00:21:24.281 Endurance Groups: Not Supported 00:21:24.281 Predictable Latency Mode: Not Supported 00:21:24.281 Traffic Based Keep ALive: Not Supported 00:21:24.281 Namespace Granularity: Not Supported 00:21:24.281 SQ Associations: Not Supported 00:21:24.281 UUID List: Not Supported 00:21:24.281 Multi-Domain Subsystem: Not Supported 00:21:24.281 Fixed Capacity Management: Not Supported 00:21:24.281 Variable Capacity Management: Not Supported 00:21:24.281 Delete Endurance Group: Not Supported 00:21:24.281 Delete NVM Set: Not Supported 00:21:24.281 Extended LBA Formats Supported: Not Supported 00:21:24.281 Flexible Data Placement Supported: Not Supported 00:21:24.281 00:21:24.281 Controller Memory Buffer Support 00:21:24.281 ================================ 00:21:24.281 Supported: No 00:21:24.281 00:21:24.281 Persistent Memory Region Support 00:21:24.281 ================================ 00:21:24.281 Supported: No 00:21:24.281 00:21:24.281 Admin Command Set Attributes 00:21:24.281 ============================ 00:21:24.281 Security Send/Receive: Not Supported 00:21:24.281 Format NVM: Not Supported 00:21:24.281 Firmware Activate/Download: Not Supported 00:21:24.281 Namespace Management: Not Supported 00:21:24.281 Device Self-Test: Not Supported 00:21:24.281 Directives: Not Supported 00:21:24.281 NVMe-MI: Not Supported 00:21:24.281 Virtualization Management: Not 
Supported 00:21:24.281 Doorbell Buffer Config: Not Supported 00:21:24.281 Get LBA Status Capability: Not Supported 00:21:24.281 Command & Feature Lockdown Capability: Not Supported 00:21:24.281 Abort Command Limit: 4 00:21:24.281 Async Event Request Limit: 4 00:21:24.281 Number of Firmware Slots: N/A 00:21:24.281 Firmware Slot 1 Read-Only: N/A 00:21:24.281 Firmware Activation Without Reset: N/A 00:21:24.281 Multiple Update Detection Support: N/A 00:21:24.281 Firmware Update Granularity: No Information Provided 00:21:24.281 Per-Namespace SMART Log: No 00:21:24.281 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.281 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:24.281 Command Effects Log Page: Supported 00:21:24.281 Get Log Page Extended Data: Supported 00:21:24.281 Telemetry Log Pages: Not Supported 00:21:24.281 Persistent Event Log Pages: Not Supported 00:21:24.281 Supported Log Pages Log Page: May Support 00:21:24.281 Commands Supported & Effects Log Page: Not Supported 00:21:24.281 Feature Identifiers & Effects Log Page:May Support 00:21:24.281 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.281 Data Area 4 for Telemetry Log: Not Supported 00:21:24.281 Error Log Page Entries Supported: 128 00:21:24.281 Keep Alive: Supported 00:21:24.281 Keep Alive Granularity: 10000 ms 00:21:24.281 00:21:24.281 NVM Command Set Attributes 00:21:24.281 ========================== 00:21:24.281 Submission Queue Entry Size 00:21:24.281 Max: 64 00:21:24.281 Min: 64 00:21:24.281 Completion Queue Entry Size 00:21:24.281 Max: 16 00:21:24.281 Min: 16 00:21:24.281 Number of Namespaces: 32 00:21:24.281 Compare Command: Supported 00:21:24.281 Write Uncorrectable Command: Not Supported 00:21:24.281 Dataset Management Command: Supported 00:21:24.281 Write Zeroes Command: Supported 00:21:24.281 Set Features Save Field: Not Supported 00:21:24.281 Reservations: Supported 00:21:24.281 Timestamp: Not Supported 00:21:24.281 Copy: Supported 00:21:24.281 Volatile Write Cache: Present 00:21:24.281 Atomic Write Unit (Normal): 1 00:21:24.281 Atomic Write Unit (PFail): 1 00:21:24.281 Atomic Compare & Write Unit: 1 00:21:24.281 Fused Compare & Write: Supported 00:21:24.281 Scatter-Gather List 00:21:24.281 SGL Command Set: Supported 00:21:24.281 SGL Keyed: Supported 00:21:24.281 SGL Bit Bucket Descriptor: Not Supported 00:21:24.281 SGL Metadata Pointer: Not Supported 00:21:24.281 Oversized SGL: Not Supported 00:21:24.281 SGL Metadata Address: Not Supported 00:21:24.281 SGL Offset: Supported 00:21:24.281 Transport SGL Data Block: Not Supported 00:21:24.281 Replay Protected Memory Block: Not Supported 00:21:24.281 00:21:24.281 Firmware Slot Information 00:21:24.281 ========================= 00:21:24.281 Active slot: 1 00:21:24.281 Slot 1 Firmware Revision: 25.01 00:21:24.281 00:21:24.281 00:21:24.281 Commands Supported and Effects 00:21:24.281 ============================== 00:21:24.281 Admin Commands 00:21:24.281 -------------- 00:21:24.281 Get Log Page (02h): Supported 00:21:24.281 Identify (06h): Supported 00:21:24.281 Abort (08h): Supported 00:21:24.281 Set Features (09h): Supported 00:21:24.281 Get Features (0Ah): Supported 00:21:24.281 Asynchronous Event Request (0Ch): Supported 00:21:24.281 Keep Alive (18h): Supported 00:21:24.281 I/O Commands 00:21:24.281 ------------ 00:21:24.281 Flush (00h): Supported LBA-Change 00:21:24.281 Write (01h): Supported LBA-Change 00:21:24.281 Read (02h): Supported 00:21:24.281 Compare (05h): Supported 00:21:24.281 Write Zeroes (08h): Supported LBA-Change 00:21:24.281 
Dataset Management (09h): Supported LBA-Change 00:21:24.281 Copy (19h): Supported LBA-Change 00:21:24.281 00:21:24.281 Error Log 00:21:24.281 ========= 00:21:24.281 00:21:24.281 Arbitration 00:21:24.281 =========== 00:21:24.281 Arbitration Burst: 1 00:21:24.281 00:21:24.281 Power Management 00:21:24.281 ================ 00:21:24.281 Number of Power States: 1 00:21:24.281 Current Power State: Power State #0 00:21:24.281 Power State #0: 00:21:24.281 Max Power: 0.00 W 00:21:24.281 Non-Operational State: Operational 00:21:24.281 Entry Latency: Not Reported 00:21:24.281 Exit Latency: Not Reported 00:21:24.281 Relative Read Throughput: 0 00:21:24.281 Relative Read Latency: 0 00:21:24.281 Relative Write Throughput: 0 00:21:24.281 Relative Write Latency: 0 00:21:24.281 Idle Power: Not Reported 00:21:24.281 Active Power: Not Reported 00:21:24.281 Non-Operational Permissive Mode: Not Supported 00:21:24.281 00:21:24.281 Health Information 00:21:24.281 ================== 00:21:24.281 Critical Warnings: 00:21:24.281 Available Spare Space: OK 00:21:24.281 Temperature: OK 00:21:24.281 Device Reliability: OK 00:21:24.281 Read Only: No 00:21:24.281 Volatile Memory Backup: OK 00:21:24.281 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:24.281 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:24.281 Available Spare: 0% 00:21:24.281 Available Spare Threshold: 0% 00:21:24.281 Life Percentage Used:[2024-12-13 09:32:36.381563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.281 [2024-12-13 09:32:36.381567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15b7690) 00:21:24.281 [2024-12-13 09:32:36.381573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.281 [2024-12-13 09:32:36.381586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619b80, cid 7, qid 0 00:21:24.281 [2024-12-13 09:32:36.381666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.281 [2024-12-13 09:32:36.381671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.281 [2024-12-13 09:32:36.381675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.281 [2024-12-13 09:32:36.381678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619b80) on tqpair=0x15b7690 00:21:24.281 [2024-12-13 09:32:36.381708] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:24.281 [2024-12-13 09:32:36.381719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619100) on tqpair=0x15b7690 00:21:24.281 [2024-12-13 09:32:36.381725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.281 [2024-12-13 09:32:36.381729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619280) on tqpair=0x15b7690 00:21:24.281 [2024-12-13 09:32:36.381733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.281 [2024-12-13 09:32:36.381738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619400) on tqpair=0x15b7690 00:21:24.281 [2024-12-13 09:32:36.381742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.281 [2024-12-13 09:32:36.381746] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.281 [2024-12-13 09:32:36.381751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.281 [2024-12-13 09:32:36.381757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.281 [2024-12-13 09:32:36.381761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.281 [2024-12-13 09:32:36.381764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.281 [2024-12-13 09:32:36.381770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.281 [2024-12-13 09:32:36.381783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.281 [2024-12-13 09:32:36.381848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.281 [2024-12-13 09:32:36.381854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.281 [2024-12-13 09:32:36.381857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.281 [2024-12-13 09:32:36.381860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.381866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.381869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.381873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.381878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.381892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.381965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.381972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.381975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.381979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.381983] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:24.282 [2024-12-13 09:32:36.381987] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:24.282 [2024-12-13 09:32:36.381996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.381999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382412] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 
00:21:24.282 [2024-12-13 09:32:36.382748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.382942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.382948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.382951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.382963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.382969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.382975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.382985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.383046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.383052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.383055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.383058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.282 [2024-12-13 09:32:36.383066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.282 [2024-12-13 09:32:36.383070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:24.282 [2024-12-13 09:32:36.383073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.282 [2024-12-13 09:32:36.383079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.282 [2024-12-13 09:32:36.383087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.282 [2024-12-13 09:32:36.383151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.282 [2024-12-13 09:32:36.383157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.282 [2024-12-13 09:32:36.383160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383394] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.383792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.383798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.383801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.383812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.383819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.383824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.383833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.387454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.387462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.387465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.387469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.387479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.387482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.387485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b7690) 00:21:24.283 [2024-12-13 09:32:36.387491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.283 [2024-12-13 09:32:36.387503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1619580, cid 3, qid 0 00:21:24.283 [2024-12-13 09:32:36.387567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.283 [2024-12-13 09:32:36.387574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.283 [2024-12-13 09:32:36.387577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.283 [2024-12-13 09:32:36.387580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1619580) on tqpair=0x15b7690 00:21:24.283 [2024-12-13 09:32:36.387586] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:21:24.283 0% 00:21:24.283 Data Units Read: 0 00:21:24.283 Data Units Written: 0 00:21:24.283 Host Read Commands: 0 00:21:24.283 Host Write Commands: 0 00:21:24.283 Controller Busy Time: 0 minutes 00:21:24.283 Power Cycles: 0 00:21:24.283 Power On Hours: 0 hours 00:21:24.283 Unsafe Shutdowns: 0 00:21:24.283 Unrecoverable Media Errors: 0 00:21:24.283 Lifetime Error Log Entries: 0 00:21:24.283 Warning Temperature Time: 0 minutes 00:21:24.283 Critical Temperature Time: 0 minutes 00:21:24.283 00:21:24.283 Number of Queues 00:21:24.283 ================ 00:21:24.283 Number of I/O Submission Queues: 127 00:21:24.283 Number of I/O Completion Queues: 
127 00:21:24.283 00:21:24.283 Active Namespaces 00:21:24.283 ================= 00:21:24.283 Namespace ID:1 00:21:24.283 Error Recovery Timeout: Unlimited 00:21:24.283 Command Set Identifier: NVM (00h) 00:21:24.283 Deallocate: Supported 00:21:24.283 Deallocated/Unwritten Error: Not Supported 00:21:24.283 Deallocated Read Value: Unknown 00:21:24.283 Deallocate in Write Zeroes: Not Supported 00:21:24.283 Deallocated Guard Field: 0xFFFF 00:21:24.283 Flush: Supported 00:21:24.283 Reservation: Supported 00:21:24.283 Namespace Sharing Capabilities: Multiple Controllers 00:21:24.283 Size (in LBAs): 131072 (0GiB) 00:21:24.283 Capacity (in LBAs): 131072 (0GiB) 00:21:24.283 Utilization (in LBAs): 131072 (0GiB) 00:21:24.283 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:24.283 EUI64: ABCDEF0123456789 00:21:24.283 UUID: 4daaa46f-8ce8-44b7-95a0-b15567948903 00:21:24.283 Thin Provisioning: Not Supported 00:21:24.283 Per-NS Atomic Units: Yes 00:21:24.283 Atomic Boundary Size (Normal): 0 00:21:24.283 Atomic Boundary Size (PFail): 0 00:21:24.283 Atomic Boundary Offset: 0 00:21:24.283 Maximum Single Source Range Length: 65535 00:21:24.283 Maximum Copy Length: 65535 00:21:24.283 Maximum Source Range Count: 1 00:21:24.283 NGUID/EUI64 Never Reused: No 00:21:24.283 Namespace Write Protected: No 00:21:24.283 Number of LBA Formats: 1 00:21:24.283 Current LBA Format: LBA Format #00 00:21:24.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:24.283 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:24.283 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.284 rmmod nvme_tcp 00:21:24.284 rmmod nvme_fabrics 00:21:24.284 rmmod nvme_keyring 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3403513 ']' 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3403513 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3403513 ']' 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@958 -- # kill -0 3403513 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3403513 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3403513' 00:21:24.284 killing process with pid 3403513 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3403513 00:21:24.284 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3403513 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.543 09:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.446 09:32:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:26.446 00:21:26.446 real 0m9.217s 00:21:26.446 user 0m5.544s 00:21:26.446 sys 0m4.726s 00:21:26.446 09:32:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.446 09:32:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.446 ************************************ 00:21:26.446 END TEST nvmf_identify 00:21:26.446 ************************************ 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.704 ************************************ 00:21:26.704 START TEST nvmf_perf 00:21:26.704 ************************************ 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
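Condensed from the xtrace above, the teardown of the identify test amounts to deleting the subsystem over the RPC socket, unloading the kernel NVMe/TCP host modules, stopping the nvmf_tgt process, and cleaning up the firewall rule and leftover address. A rough standalone equivalent, with the paths and PID copied from the log and the namespace-removal step inferred, would be:

    # Remove the subsystem created for the test.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel NVMe fabrics stack used on the host side
    # (this cascades into the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the SPDK target process (reactor_0) and wait for it to exit.
    kill 3403513
    wait 3403513

    # Drop the SPDK-tagged iptables rules, remove the test namespace (assumed to be what
    # _remove_spdk_ns does here), and flush the initiator-side address.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

This is a sketch of the sequence only, not the exact nvmftestfini implementation.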
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:26.704 * Looking for test storage... 00:21:26.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:26.704 09:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.704 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.705 --rc genhtml_branch_coverage=1 00:21:26.705 --rc genhtml_function_coverage=1 00:21:26.705 --rc genhtml_legend=1 00:21:26.705 --rc geninfo_all_blocks=1 00:21:26.705 --rc geninfo_unexecuted_blocks=1 00:21:26.705 00:21:26.705 ' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.705 --rc genhtml_branch_coverage=1 00:21:26.705 --rc genhtml_function_coverage=1 00:21:26.705 --rc genhtml_legend=1 00:21:26.705 --rc geninfo_all_blocks=1 00:21:26.705 --rc geninfo_unexecuted_blocks=1 00:21:26.705 00:21:26.705 ' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.705 --rc genhtml_branch_coverage=1 00:21:26.705 --rc genhtml_function_coverage=1 00:21:26.705 --rc genhtml_legend=1 00:21:26.705 --rc geninfo_all_blocks=1 00:21:26.705 --rc geninfo_unexecuted_blocks=1 00:21:26.705 00:21:26.705 ' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:26.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.705 --rc genhtml_branch_coverage=1 00:21:26.705 --rc genhtml_function_coverage=1 00:21:26.705 --rc genhtml_legend=1 00:21:26.705 --rc geninfo_all_blocks=1 00:21:26.705 --rc geninfo_unexecuted_blocks=1 00:21:26.705 00:21:26.705 ' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.705 09:32:39 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.705 09:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:33.268 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:33.268 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:33.268 Found net devices under 0000:af:00.0: cvl_0_0 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.268 09:32:44 
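The device-discovery loop traced here resolves each supported PCI function to its kernel netdev by listing the net/ directory under the device's sysfs node; for the two E810 ports on this host that yields cvl_0_0 and cvl_0_1. A minimal standalone check of the same mapping, with the PCI addresses taken from the log, looks like:

    # Map each NVMe-oF-capable NIC's PCI address to its netdev name, as nvmf/common.sh does.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "$pci -> $(basename "$dev")"
        done
    done
    # Expected here: 0000:af:00.0 -> cvl_0_0 and 0000:af:00.1 -> cvl_0_1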
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:33.268 Found net devices under 0000:af:00.1: cvl_0_1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.268 09:32:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:33.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:21:33.268 00:21:33.268 --- 10.0.0.2 ping statistics --- 00:21:33.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.268 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:21:33.268 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:21:33.269 00:21:33.269 --- 10.0.0.1 ping statistics --- 00:21:33.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.269 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3407210 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3407210 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3407210 ']' 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:33.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.269 09:32:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.269 [2024-12-13 09:32:44.846361] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:21:33.269 [2024-12-13 09:32:44.846413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.269 [2024-12-13 09:32:44.913175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.269 [2024-12-13 09:32:44.954587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.269 [2024-12-13 09:32:44.954625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.269 [2024-12-13 09:32:44.954632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.269 [2024-12-13 09:32:44.954639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.269 [2024-12-13 09:32:44.954644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.269 [2024-12-13 09:32:44.955963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.269 [2024-12-13 09:32:44.956059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.269 [2024-12-13 09:32:44.956168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.269 [2024-12-13 09:32:44.956169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:33.269 09:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:35.798 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:35.798 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:36.056 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:36.056 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.314 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
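The flow traced above and below condenses to a short shell sketch. This is not a verbatim excerpt of the harness; it only restates the commands visible in the trace, using the same names (cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, addresses 10.0.0.1/10.0.0.2, NQN nqn.2016-06.io.spdk:cnode1), and $SPDK_DIR / $RPC are shorthand introduced here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk and its scripts/rpc.py.

# Network plumbing done by nvmf_tcp_init: one port of the e810 pair is moved
# into a network namespace so target and initiator talk over real TCP.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Target side: nvmf_tgt runs inside the namespace, then host/perf.sh configures
# it over the RPC socket with a malloc bdev and the local NVMe bdev as namespaces.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand, not from the harness
RPC=$SPDK_DIR/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: spdk_nvme_perf is pointed at the TCP listener; the runs below
# repeat this with different queue depths and I/O sizes.
$SPDK_DIR/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'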
00:21:36.314 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:36.314 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:36.314 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:36.314 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.572 [2024-12-13 09:32:48.747087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.572 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.831 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:36.831 09:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:36.831 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:36.831 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:37.089 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.347 [2024-12-13 09:32:49.531415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.347 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:37.605 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:37.605 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:37.605 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:37.605 09:32:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:38.982 Initializing NVMe Controllers 00:21:38.982 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:21:38.982 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:38.982 Initialization complete. Launching workers. 
00:21:38.982 ======================================================== 00:21:38.982 Latency(us) 00:21:38.982 Device Information : IOPS MiB/s Average min max 00:21:38.982 PCIE (0000:5e:00.0) NSID 1 from core 0: 99533.16 388.80 321.03 29.96 8187.10 00:21:38.982 ======================================================== 00:21:38.982 Total : 99533.16 388.80 321.03 29.96 8187.10 00:21:38.982 00:21:38.982 09:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.356 Initializing NVMe Controllers 00:21:40.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:40.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:40.356 Initialization complete. Launching workers. 00:21:40.356 ======================================================== 00:21:40.356 Latency(us) 00:21:40.356 Device Information : IOPS MiB/s Average min max 00:21:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 148.00 0.58 6844.28 106.30 45681.46 00:21:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36.00 0.14 27882.41 7217.58 47908.84 00:21:40.356 ======================================================== 00:21:40.356 Total : 184.00 0.72 10960.43 106.30 47908.84 00:21:40.356 00:21:40.356 09:32:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.732 Initializing NVMe Controllers 00:21:41.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:41.732 Initialization complete. Launching workers. 00:21:41.732 ======================================================== 00:21:41.732 Latency(us) 00:21:41.732 Device Information : IOPS MiB/s Average min max 00:21:41.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11279.11 44.06 2846.39 399.40 6338.74 00:21:41.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3794.70 14.82 8467.95 5212.10 16067.96 00:21:41.732 ======================================================== 00:21:41.732 Total : 15073.81 58.88 4261.57 399.40 16067.96 00:21:41.732 00:21:41.732 09:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:41.732 09:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:41.732 09:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.264 Initializing NVMe Controllers 00:21:44.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.264 Controller IO queue size 128, less than required. 00:21:44.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:44.264 Controller IO queue size 128, less than required. 00:21:44.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:44.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:44.264 Initialization complete. Launching workers. 00:21:44.264 ======================================================== 00:21:44.264 Latency(us) 00:21:44.264 Device Information : IOPS MiB/s Average min max 00:21:44.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1834.01 458.50 70718.18 50904.78 111842.54 00:21:44.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.34 150.34 222046.86 102761.33 356172.58 00:21:44.264 ======================================================== 00:21:44.264 Total : 2435.35 608.84 108084.43 50904.78 356172.58 00:21:44.264 00:21:44.264 09:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:44.522 No valid NVMe controllers or AIO or URING devices found 00:21:44.522 Initializing NVMe Controllers 00:21:44.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.522 Controller IO queue size 128, less than required. 00:21:44.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.522 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:44.522 Controller IO queue size 128, less than required. 00:21:44.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:44.522 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:44.522 WARNING: Some requested NVMe devices were skipped 00:21:44.522 09:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:47.054 Initializing NVMe Controllers 00:21:47.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.054 Controller IO queue size 128, less than required. 00:21:47.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.055 Controller IO queue size 128, less than required. 00:21:47.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:47.055 Initialization complete. Launching workers. 
00:21:47.055 00:21:47.055 ==================== 00:21:47.055 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:47.055 TCP transport: 00:21:47.055 polls: 16364 00:21:47.055 idle_polls: 12356 00:21:47.055 sock_completions: 4008 00:21:47.055 nvme_completions: 6247 00:21:47.055 submitted_requests: 9356 00:21:47.055 queued_requests: 1 00:21:47.055 00:21:47.055 ==================== 00:21:47.055 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:47.055 TCP transport: 00:21:47.055 polls: 16212 00:21:47.055 idle_polls: 11955 00:21:47.055 sock_completions: 4257 00:21:47.055 nvme_completions: 6347 00:21:47.055 submitted_requests: 9528 00:21:47.055 queued_requests: 1 00:21:47.055 ======================================================== 00:21:47.055 Latency(us) 00:21:47.055 Device Information : IOPS MiB/s Average min max 00:21:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1560.61 390.15 84852.78 55586.08 145375.28 00:21:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1585.60 396.40 81024.75 46807.97 133072.48 00:21:47.055 ======================================================== 00:21:47.055 Total : 3146.21 786.55 82923.57 46807.97 145375.28 00:21:47.055 00:21:47.055 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:47.055 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.313 rmmod nvme_tcp 00:21:47.313 rmmod nvme_fabrics 00:21:47.313 rmmod nvme_keyring 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3407210 ']' 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3407210 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3407210 ']' 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3407210 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.313 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3407210 00:21:47.572 09:32:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.572 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.572 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3407210' 00:21:47.572 killing process with pid 3407210 00:21:47.572 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3407210 00:21:47.572 09:32:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3407210 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.947 09:33:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.481 00:21:51.481 real 0m24.373s 00:21:51.481 user 1m4.301s 00:21:51.481 sys 0m8.154s 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:51.481 ************************************ 00:21:51.481 END TEST nvmf_perf 00:21:51.481 ************************************ 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.481 ************************************ 00:21:51.481 START TEST nvmf_fio_host 00:21:51.481 ************************************ 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:51.481 * Looking for test storage... 
00:21:51.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:51.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.481 --rc genhtml_branch_coverage=1 00:21:51.481 --rc genhtml_function_coverage=1 00:21:51.481 --rc genhtml_legend=1 00:21:51.481 --rc geninfo_all_blocks=1 00:21:51.481 --rc geninfo_unexecuted_blocks=1 00:21:51.481 00:21:51.481 ' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:51.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.481 --rc genhtml_branch_coverage=1 00:21:51.481 --rc genhtml_function_coverage=1 00:21:51.481 --rc genhtml_legend=1 00:21:51.481 --rc geninfo_all_blocks=1 00:21:51.481 --rc geninfo_unexecuted_blocks=1 00:21:51.481 00:21:51.481 ' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:51.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.481 --rc genhtml_branch_coverage=1 00:21:51.481 --rc genhtml_function_coverage=1 00:21:51.481 --rc genhtml_legend=1 00:21:51.481 --rc geninfo_all_blocks=1 00:21:51.481 --rc geninfo_unexecuted_blocks=1 00:21:51.481 00:21:51.481 ' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:51.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.481 --rc genhtml_branch_coverage=1 00:21:51.481 --rc genhtml_function_coverage=1 00:21:51.481 --rc genhtml_legend=1 00:21:51.481 --rc geninfo_all_blocks=1 00:21:51.481 --rc geninfo_unexecuted_blocks=1 00:21:51.481 00:21:51.481 ' 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.481 09:33:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.481 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:51.482 
09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.482 09:33:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.754 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:56.755 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:56.755 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:56.755 Found net devices under 0000:af:00.0: cvl_0_0 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:56.755 Found net devices under 0000:af:00.1: cvl_0_1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:56.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:21:56.755 00:21:56.755 --- 10.0.0.2 ping statistics --- 00:21:56.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.755 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:56.755 00:21:56.755 --- 10.0.0.1 ping statistics --- 00:21:56.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.755 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3413308 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3413308 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3413308 ']' 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.755 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.756 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.756 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.756 09:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.756 [2024-12-13 09:33:08.876468] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:21:56.756 [2024-12-13 09:33:08.876529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.756 [2024-12-13 09:33:08.942464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.756 [2024-12-13 09:33:08.983986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.756 [2024-12-13 09:33:08.984025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.756 [2024-12-13 09:33:08.984032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.756 [2024-12-13 09:33:08.984038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.756 [2024-12-13 09:33:08.984043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.756 [2024-12-13 09:33:08.985501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.756 [2024-12-13 09:33:08.985524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.756 [2024-12-13 09:33:08.985592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.756 [2024-12-13 09:33:08.985593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.756 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.756 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:56.756 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.014 [2024-12-13 09:33:09.263018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.014 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:57.014 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.014 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.014 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:57.272 Malloc1 00:21:57.272 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.530 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:57.788 09:33:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.047 [2024-12-13 09:33:10.161138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:58.047 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:58.317 09:33:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:58.574 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:58.574 fio-3.35 00:21:58.574 Starting 1 thread 00:22:01.100 00:22:01.100 test: (groupid=0, jobs=1): 
err= 0: pid=3414271: Fri Dec 13 09:33:13 2024 00:22:01.100 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec) 00:22:01.100 slat (nsec): min=1533, max=239953, avg=1693.88, stdev=2186.51 00:22:01.100 clat (usec): min=3089, max=10381, avg=5966.22, stdev=453.14 00:22:01.100 lat (usec): min=3120, max=10382, avg=5967.92, stdev=453.06 00:22:01.100 clat percentiles (usec): 00:22:01.100 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:01.100 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:22:01.100 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:01.100 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9896], 00:22:01.100 | 99.99th=[10421] 00:22:01.100 bw ( KiB/s): min=46400, max=48160, per=99.96%, avg=47426.00, stdev=745.86, samples=4 00:22:01.100 iops : min=11600, max=12040, avg=11856.50, stdev=186.46, samples=4 00:22:01.100 write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec); 0 zone resets 00:22:01.100 slat (nsec): min=1572, max=224506, avg=1770.32, stdev=1637.62 00:22:01.100 clat (usec): min=2436, max=9389, avg=4816.46, stdev=369.28 00:22:01.100 lat (usec): min=2450, max=9390, avg=4818.23, stdev=369.28 00:22:01.100 clat percentiles (usec): 00:22:01.100 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:22:01.100 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:22:01.100 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:01.100 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6980], 99.95th=[ 8586], 00:22:01.100 | 99.99th=[ 9372] 00:22:01.100 bw ( KiB/s): min=46976, max=47744, per=100.00%, avg=47232.00, stdev=350.54, samples=4 00:22:01.100 iops : min=11744, max=11936, avg=11808.00, stdev=87.64, samples=4 00:22:01.100 lat (msec) : 4=0.61%, 10=99.37%, 20=0.01% 00:22:01.100 cpu : usr=72.26%, sys=26.50%, ctx=105, majf=0, minf=2 00:22:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:01.101 issued rwts: total=23782,23674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.101 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:01.101 00:22:01.101 Run status group 0 (all jobs): 00:22:01.101 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec 00:22:01.101 WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# local sanitizers 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:01.101 09:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:01.101 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:01.101 fio-3.35 00:22:01.101 Starting 1 thread 00:22:02.994 [2024-12-13 09:33:14.879990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139a810 is same with the state(6) to be set 00:22:02.994 [2024-12-13 09:33:14.880047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139a810 is same with the state(6) to be set 00:22:02.994 [2024-12-13 09:33:14.880057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139a810 is same with the state(6) to be set 00:22:02.994 [2024-12-13 09:33:14.880064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139a810 is same with the state(6) to be set 00:22:03.557 [2024-12-13 09:33:15.875316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b650 is same with the state(6) to be set 00:22:03.557 00:22:03.557 test: (groupid=0, jobs=1): err= 0: pid=3414836: Fri Dec 13 09:33:15 2024 00:22:03.557 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2006msec) 00:22:03.557 slat (nsec): min=2496, max=92527, avg=2791.98, stdev=1270.54 
00:22:03.557 clat (usec): min=1974, max=12873, avg=6676.63, stdev=1534.25 00:22:03.557 lat (usec): min=1976, max=12876, avg=6679.43, stdev=1534.37 00:22:03.558 clat percentiles (usec): 00:22:03.558 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:22:03.558 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:22:03.558 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9110], 00:22:03.558 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12387], 99.95th=[12780], 00:22:03.558 | 99.99th=[12911] 00:22:03.558 bw ( KiB/s): min=85184, max=94080, per=51.22%, avg=90448.00, stdev=4066.35, samples=4 00:22:03.558 iops : min= 5324, max= 5880, avg=5653.00, stdev=254.15, samples=4 00:22:03.558 write: IOPS=6553, BW=102MiB/s (107MB/s)(185MiB/1804msec); 0 zone resets 00:22:03.558 slat (usec): min=29, max=346, avg=31.09, stdev= 6.63 00:22:03.558 clat (usec): min=3034, max=15899, avg=8563.38, stdev=1468.06 00:22:03.558 lat (usec): min=3067, max=15933, avg=8594.48, stdev=1469.12 00:22:03.558 clat percentiles (usec): 00:22:03.558 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:22:03.558 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:03.558 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:22:03.558 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13698], 99.95th=[14091], 00:22:03.558 | 99.99th=[15270] 00:22:03.558 bw ( KiB/s): min=87680, max=98304, per=89.77%, avg=94120.00, stdev=4822.94, samples=4 00:22:03.558 iops : min= 5480, max= 6144, avg=5882.50, stdev=301.43, samples=4 00:22:03.558 lat (msec) : 2=0.01%, 4=1.84%, 10=90.76%, 20=7.39% 00:22:03.558 cpu : usr=85.54%, sys=13.72%, ctx=39, majf=0, minf=2 00:22:03.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:03.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:03.558 issued rwts: total=22139,11822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:03.558 00:22:03.558 Run status group 0 (all jobs): 00:22:03.558 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2006-2006msec 00:22:03.558 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=185MiB (194MB), run=1804-1804msec 00:22:03.558 09:33:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:03.814 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:03.814 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:03.814 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:03.814 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.815 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.815 rmmod nvme_tcp 00:22:03.815 rmmod nvme_fabrics 00:22:03.815 rmmod nvme_keyring 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3413308 ']' 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3413308 ']' 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3413308' 00:22:04.073 killing process with pid 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3413308 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.073 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.332 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.332 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.332 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.332 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.332 09:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.233 00:22:06.233 real 0m15.213s 00:22:06.233 user 0m46.373s 00:22:06.233 sys 0m6.062s 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.233 ************************************ 
00:22:06.233 END TEST nvmf_fio_host 00:22:06.233 ************************************ 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.233 ************************************ 00:22:06.233 START TEST nvmf_failover 00:22:06.233 ************************************ 00:22:06.233 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:06.492 * Looking for test storage... 00:22:06.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.492 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.493 --rc genhtml_branch_coverage=1 00:22:06.493 --rc genhtml_function_coverage=1 00:22:06.493 --rc genhtml_legend=1 00:22:06.493 --rc geninfo_all_blocks=1 00:22:06.493 --rc geninfo_unexecuted_blocks=1 00:22:06.493 00:22:06.493 ' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.493 --rc genhtml_branch_coverage=1 00:22:06.493 --rc genhtml_function_coverage=1 00:22:06.493 --rc genhtml_legend=1 00:22:06.493 --rc geninfo_all_blocks=1 00:22:06.493 --rc geninfo_unexecuted_blocks=1 00:22:06.493 00:22:06.493 ' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.493 --rc genhtml_branch_coverage=1 00:22:06.493 --rc genhtml_function_coverage=1 00:22:06.493 --rc genhtml_legend=1 00:22:06.493 --rc geninfo_all_blocks=1 00:22:06.493 --rc geninfo_unexecuted_blocks=1 00:22:06.493 00:22:06.493 ' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.493 --rc genhtml_branch_coverage=1 00:22:06.493 --rc genhtml_function_coverage=1 00:22:06.493 --rc genhtml_legend=1 00:22:06.493 --rc geninfo_all_blocks=1 00:22:06.493 --rc geninfo_unexecuted_blocks=1 00:22:06.493 00:22:06.493 ' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.493 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
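(The "[: : integer expression expected" message a few lines up is a shell quirk rather than a fatal error in this run: common.sh line 33 runs an -eq comparison while the variable it tests expands to an empty string, and the run simply continues. A minimal illustration of the failure mode and the usual guard — not the project's actual code:)

# [ with -eq needs integer operands; an empty or unset variable makes
# the comparison itself fail (exit status 2) and print this message.
flag=""
[ "$flag" -eq 1 ]            # -> [: : integer expression expected
# Supplying a default value silences it:
[ "${flag:-0}" -eq 1 ] && echo "flag is set"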
00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.494 09:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:11.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:11.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:11.758 Found net devices under 0000:af:00.0: cvl_0_0 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.758 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:11.759 Found net devices under 0000:af:00.1: cvl_0_1 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.759 09:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.759 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.759 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.759 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.759 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.759 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:22:12.020 00:22:12.020 --- 10.0.0.2 ping statistics --- 00:22:12.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.020 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:22:12.020 00:22:12.020 --- 10.0.0.1 ping statistics --- 00:22:12.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.020 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3418526 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3418526 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3418526 ']' 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.020 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.020 [2024-12-13 09:33:24.248208] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:22:12.020 [2024-12-13 09:33:24.248258] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.020 [2024-12-13 09:33:24.316202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:12.020 [2024-12-13 09:33:24.356568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:12.020 [2024-12-13 09:33:24.356602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.020 [2024-12-13 09:33:24.356609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.020 [2024-12-13 09:33:24.356615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.020 [2024-12-13 09:33:24.356620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.020 [2024-12-13 09:33:24.357828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.020 [2024-12-13 09:33:24.357913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.020 [2024-12-13 09:33:24.357914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:12.395 [2024-12-13 09:33:24.667911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.395 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:12.701 Malloc0 00:22:12.701 09:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.959 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.217 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.217 [2024-12-13 09:33:25.499212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.218 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:13.476 [2024-12-13 09:33:25.699771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.476 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:13.734 [2024-12-13 09:33:25.892391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3418800 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3418800 /var/tmp/bdevperf.sock 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3418800 ']' 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.734 09:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:13.993 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.993 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:13.993 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:14.251 NVMe0n1 00:22:14.251 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:14.508 00:22:14.508 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.508 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3419014 00:22:14.508 09:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:15.883 09:33:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.884 [2024-12-13 09:33:28.038536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2113560 is same with the state(6) to be set 00:22:15.884 [2024-12-13 09:33:28.038580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2113560 is same with the state(6) to be set 00:22:15.884 [2024-12-13 09:33:28.038588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2113560 is same with the state(6) to be set 00:22:15.884 
[2024-12-13 09:33:28.039113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2113560 is same with the state(6) to be set 00:22:15.885 [2024-12-13 09:33:28.039119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2113560 is same with the state(6) to be set 00:22:15.885 09:33:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:19.167 09:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:19.167 00:22:19.424 09:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:19.424 09:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:22.706 09:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.706 [2024-12-13 09:33:34.939521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.706 09:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:23.637 09:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:23.895 [2024-12-13 09:33:36.156098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.895 [2024-12-13 09:33:36.156200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2260710 is same with the
state(6) to be set 00:22:23.896 [2024-12-13 09:33:36.156475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.896 [2024-12-13 09:33:36.156481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.896 [2024-12-13 09:33:36.156487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.896 [2024-12-13 09:33:36.156493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.896 [2024-12-13 09:33:36.156499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260710 is same with the state(6) to be set 00:22:23.896 09:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3419014 00:22:30.455 { 00:22:30.455 "results": [ 00:22:30.455 { 00:22:30.455 "job": "NVMe0n1", 00:22:30.455 "core_mask": "0x1", 00:22:30.455 "workload": "verify", 00:22:30.455 "status": "finished", 00:22:30.455 "verify_range": { 00:22:30.455 "start": 0, 00:22:30.455 "length": 16384 00:22:30.455 }, 00:22:30.455 "queue_depth": 128, 00:22:30.455 "io_size": 4096, 00:22:30.455 "runtime": 15.003426, 00:22:30.455 "iops": 10902.376563859481, 00:22:30.455 "mibps": 42.5874084525761, 00:22:30.455 "io_failed": 17853, 00:22:30.455 "io_timeout": 0, 00:22:30.455 "avg_latency_us": 10563.27029785724, 00:22:30.455 "min_latency_us": 620.2514285714286, 00:22:30.455 "max_latency_us": 21845.333333333332 00:22:30.455 } 00:22:30.455 ], 00:22:30.455 "core_count": 1 00:22:30.455 } 00:22:30.455 09:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3418800 00:22:30.455 09:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3418800 ']' 00:22:30.455 09:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3418800 00:22:30.455 09:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3418800 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3418800' 00:22:30.455 killing process with pid 3418800 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3418800 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3418800 00:22:30.455 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:30.455 [2024-12-13 09:33:25.969120] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:22:30.455 [2024-12-13 09:33:25.969174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418800 ] 00:22:30.455 [2024-12-13 09:33:26.033126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.456 [2024-12-13 09:33:26.073888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.456 Running I/O for 15 seconds... 00:22:30.456 11184.00 IOPS, 43.69 MiB/s [2024-12-13T08:33:42.822Z] [2024-12-13 09:33:28.040466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:30.456 [2024-12-13 09:33:28.040779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.456 [2024-12-13 09:33:28.040932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.456 [2024-12-13 09:33:28.040946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.456 [2024-12-13 09:33:28.040960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.456 [2024-12-13 09:33:28.040975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.040989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.040998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.456 [2024-12-13 09:33:28.041086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.456 [2024-12-13 09:33:28.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.457 [2024-12-13 09:33:28.041440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 
[2024-12-13 09:33:28.041527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.457 [2024-12-13 09:33:28.041676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.457 [2024-12-13 09:33:28.041683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99032 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.041989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.041997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 
09:33:28.042115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.458 [2024-12-13 09:33:28.042143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99136 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.458 [2024-12-13 09:33:28.042194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.458 [2024-12-13 09:33:28.042217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.458 [2024-12-13 09:33:28.042240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99160 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.458 [2024-12-13 09:33:28.042262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.458 [2024-12-13 09:33:28.042280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.458 [2024-12-13 09:33:28.042284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.458 [2024-12-13 09:33:28.042290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0 00:22:30.458 [2024-12-13 09:33:28.042296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.042465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.042470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.042475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.042481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.053992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.054000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.054006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.054014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.459 [2024-12-13 09:33:28.054025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.459 [2024-12-13 09:33:28.054032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:22:30.459 [2024-12-13 09:33:28.054039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054082] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:30.459 [2024-12-13 09:33:28.054106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.459 [2024-12-13 09:33:28.054115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054123] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.459 [2024-12-13 09:33:28.054129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.459 [2024-12-13 09:33:28.054142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.459 [2024-12-13 09:33:28.054156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:28.054162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:30.459 [2024-12-13 09:33:28.054199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9715d0 (9): Bad file descriptor 00:22:30.459 [2024-12-13 09:33:28.056972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:30.459 [2024-12-13 09:33:28.084838] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:30.459 10841.50 IOPS, 42.35 MiB/s [2024-12-13T08:33:42.825Z] 10956.00 IOPS, 42.80 MiB/s [2024-12-13T08:33:42.825Z] 11012.00 IOPS, 43.02 MiB/s [2024-12-13T08:33:42.825Z] [2024-12-13 09:33:31.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727453] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.459 [2024-12-13 09:33:31.727468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727610] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.459 [2024-12-13 09:33:31.727625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.459 [2024-12-13 09:33:31.727631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.460 [2024-12-13 09:33:31.727708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.727988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.727994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 
09:33:31.728094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.460 [2024-12-13 09:33:31.728185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.460 [2024-12-13 09:33:31.728193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.461 [2024-12-13 09:33:31.728775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.461 [2024-12-13 09:33:31.728857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.461 [2024-12-13 09:33:31.728866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728929] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.728989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.728996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.462 [2024-12-13 09:33:31.729024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.462 [2024-12-13 09:33:31.729038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39992 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.462 [2024-12-13 09:33:31.729376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99fe90 is same with the state(6) to be set 00:22:30.462 [2024-12-13 09:33:31.729391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.462 [2024-12-13 09:33:31.729396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.462 [2024-12-13 09:33:31.729401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40000 len:8 PRP1 0x0 PRP2 0x0 00:22:30.462 [2024-12-13 09:33:31.729409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729456] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:30.462 [2024-12-13 09:33:31.729480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.462 [2024-12-13 09:33:31.729488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.462 [2024-12-13 09:33:31.729505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.462 [2024-12-13 09:33:31.729518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.462 [2024-12-13 09:33:31.729525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.463 [2024-12-13 09:33:31.729533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:31.729540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:30.463 [2024-12-13 09:33:31.732336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:30.463 [2024-12-13 09:33:31.732365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9715d0 (9): Bad file descriptor 00:22:30.463 [2024-12-13 09:33:31.888686] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
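The block above records one complete failover cycle: queued I/Os on qpair 1 are completed with "ABORTED - SQ DELETION" status, bdev_nvme_failover_trid switches the path from 10.0.0.2:4421 to 10.0.0.2:4422, the controller is disconnected and reset, and bdev_nvme_reset_ctrlr_complete reports success. A minimal sketch for pulling those path switches out of a captured console log, assuming the output above has been saved to a hypothetical file named failover.log:

  # List every path switch recorded by bdev_nvme_failover_trid, in order of appearance.
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' failover.log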
00:22:30.463 10666.20 IOPS, 41.66 MiB/s [2024-12-13T08:33:42.829Z] 10741.83 IOPS, 41.96 MiB/s [2024-12-13T08:33:42.829Z] 10802.43 IOPS, 42.20 MiB/s [2024-12-13T08:33:42.829Z] 10880.62 IOPS, 42.50 MiB/s [2024-12-13T08:33:42.829Z] 10921.67 IOPS, 42.66 MiB/s [2024-12-13T08:33:42.829Z] [2024-12-13 09:33:36.156718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.463 [2024-12-13 09:33:36.156968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.156991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.156999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 
09:33:36.157187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.463 [2024-12-13 09:33:36.157290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.463 [2024-12-13 09:33:36.157298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.464 [2024-12-13 09:33:36.157556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 
09:33:36.157779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.464 [2024-12-13 09:33:36.157826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.464 [2024-12-13 09:33:36.157833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.157989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.157997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91120 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.465 [2024-12-13 09:33:36.158243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91144 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91152 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91160 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91168 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91176 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 
09:33:36.158386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91184 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.465 [2024-12-13 09:33:36.158421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.465 [2024-12-13 09:33:36.158426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91192 len:8 PRP1 0x0 PRP2 0x0 00:22:30.465 [2024-12-13 09:33:36.158432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.465 [2024-12-13 09:33:36.158438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91200 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91208 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91216 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91224 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91232 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91240 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91248 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91256 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91264 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91272 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.158678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91280 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.158684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.158692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.158697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91288 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91296 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91304 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91312 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91320 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:30.466 [2024-12-13 09:33:36.169754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91328 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.466 [2024-12-13 09:33:36.169790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.466 [2024-12-13 09:33:36.169797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91336 len:8 PRP1 0x0 PRP2 0x0 00:22:30.466 [2024-12-13 09:33:36.169805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169854] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:30.466 [2024-12-13 09:33:36.169882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.466 [2024-12-13 09:33:36.169891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.466 [2024-12-13 09:33:36.169911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.466 [2024-12-13 09:33:36.169929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.466 [2024-12-13 09:33:36.169949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.466 [2024-12-13 09:33:36.169958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:30.466 [2024-12-13 09:33:36.169994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9715d0 (9): Bad file descriptor 00:22:30.466 [2024-12-13 09:33:36.173753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:30.466 [2024-12-13 09:33:36.356050] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
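The cycle above mirrors the earlier one, this time failing the path from 10.0.0.2:4422 back to 10.0.0.2:4420 before the controller reset succeeds. The trace that follows verifies that exactly three successful resets were logged (grep -c 'Resetting controller successful', with count != 3 treated as failure); a minimal stand-alone sketch of that same check, again assuming the captured output lives in a hypothetical failover.log:

  # The test expects three successful controller resets across the whole run.
  count=$(grep -c 'Resetting controller successful' failover.log)
  if (( count != 3 )); then
    echo "unexpected reset count: $count" >&2
    exit 1
  fi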
00:22:30.466 10744.60 IOPS, 41.97 MiB/s [2024-12-13T08:33:42.832Z] 10777.64 IOPS, 42.10 MiB/s [2024-12-13T08:33:42.832Z] 10821.17 IOPS, 42.27 MiB/s [2024-12-13T08:33:42.832Z] 10851.46 IOPS, 42.39 MiB/s [2024-12-13T08:33:42.832Z] 10888.21 IOPS, 42.53 MiB/s 00:22:30.466 Latency(us) 00:22:30.466 [2024-12-13T08:33:42.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.466 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.466 Verification LBA range: start 0x0 length 0x4000 00:22:30.466 NVMe0n1 : 15.00 10902.38 42.59 1189.93 0.00 10563.27 620.25 21845.33 00:22:30.466 [2024-12-13T08:33:42.832Z] =================================================================================================================== 00:22:30.466 [2024-12-13T08:33:42.832Z] Total : 10902.38 42.59 1189.93 0.00 10563.27 620.25 21845.33 00:22:30.466 Received shutdown signal, test time was about 15.000000 seconds 00:22:30.466 00:22:30.467 Latency(us) 00:22:30.467 [2024-12-13T08:33:42.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.467 [2024-12-13T08:33:42.833Z] =================================================================================================================== 00:22:30.467 [2024-12-13T08:33:42.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3421464 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3421464 /var/tmp/bdevperf.sock 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3421464 ']' 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
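The bdevperf instance launched above runs idle (-z) and is driven entirely over its RPC socket. A condensed sketch of how the transcript exercises it, with the same options, socket and NQN as this run (paths relative to the spdk tree; the waitforlisten polling and the bdev_nvme_get_controllers checks are omitted):

# start bdevperf in RPC-controlled mode: queue depth 128, 4096-byte I/O, verify workload, 1 second per run
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# attach the NVMe-oF namespace as bdev NVMe0 with failover enabled, then kick off the job
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests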
00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.467 [2024-12-13 09:33:42.625204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.467 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:30.724 [2024-12-13 09:33:42.837826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:30.724 09:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:30.982 NVMe0n1 00:22:30.982 09:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:31.546 00:22:31.546 09:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:31.804 00:22:31.804 09:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.804 09:33:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:31.804 09:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:32.061 09:33:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:35.337 09:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.337 09:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:35.337 09:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.337 09:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3422362 00:22:35.337 09:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3422362 00:22:36.710 { 00:22:36.710 "results": [ 00:22:36.710 { 00:22:36.710 "job": "NVMe0n1", 00:22:36.710 "core_mask": "0x1", 
00:22:36.710 "workload": "verify", 00:22:36.710 "status": "finished", 00:22:36.710 "verify_range": { 00:22:36.710 "start": 0, 00:22:36.710 "length": 16384 00:22:36.710 }, 00:22:36.710 "queue_depth": 128, 00:22:36.710 "io_size": 4096, 00:22:36.710 "runtime": 1.005261, 00:22:36.710 "iops": 11148.348538339795, 00:22:36.710 "mibps": 43.54823647788982, 00:22:36.710 "io_failed": 0, 00:22:36.710 "io_timeout": 0, 00:22:36.710 "avg_latency_us": 11441.02899225399, 00:22:36.710 "min_latency_us": 1966.08, 00:22:36.710 "max_latency_us": 9986.438095238096 00:22:36.710 } 00:22:36.710 ], 00:22:36.710 "core_count": 1 00:22:36.710 } 00:22:36.710 09:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:36.710 [2024-12-13 09:33:42.269745] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:22:36.710 [2024-12-13 09:33:42.269794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421464 ] 00:22:36.710 [2024-12-13 09:33:42.332441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.710 [2024-12-13 09:33:42.369457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.710 [2024-12-13 09:33:44.328097] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:36.710 [2024-12-13 09:33:44.328142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.710 [2024-12-13 09:33:44.328153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.710 [2024-12-13 09:33:44.328162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.710 [2024-12-13 09:33:44.328169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.710 [2024-12-13 09:33:44.328176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.710 [2024-12-13 09:33:44.328183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.710 [2024-12-13 09:33:44.328190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.710 [2024-12-13 09:33:44.328196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.710 [2024-12-13 09:33:44.328203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:22:36.710 [2024-12-13 09:33:44.328228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:36.710 [2024-12-13 09:33:44.328241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a985d0 (9): Bad file descriptor 00:22:36.710 [2024-12-13 09:33:44.339086] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:36.710 Running I/O for 1 seconds... 00:22:36.710 11079.00 IOPS, 43.28 MiB/s 00:22:36.710 Latency(us) 00:22:36.710 [2024-12-13T08:33:49.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.710 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:36.710 Verification LBA range: start 0x0 length 0x4000 00:22:36.710 NVMe0n1 : 1.01 11148.35 43.55 0.00 0.00 11441.03 1966.08 9986.44 00:22:36.710 [2024-12-13T08:33:49.076Z] =================================================================================================================== 00:22:36.710 [2024-12-13T08:33:49.076Z] Total : 11148.35 43.55 0.00 0.00 11441.03 1966.08 9986.44 00:22:36.710 09:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.710 09:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:36.710 09:33:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.710 09:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.710 09:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:36.968 09:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.226 09:33:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3421464 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3421464 ']' 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3421464 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3421464 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3421464' 00:22:40.511 killing process with pid 3421464 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3421464 00:22:40.511 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3421464 00:22:40.769 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:40.769 09:33:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:40.769 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.770 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:40.770 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.770 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.770 rmmod nvme_tcp 00:22:40.770 rmmod nvme_fabrics 00:22:40.770 rmmod nvme_keyring 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3418526 ']' 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3418526 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3418526 ']' 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3418526 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3418526 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3418526' 00:22:41.028 killing process with pid 3418526 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3418526 00:22:41.028 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3418526 00:22:41.287 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:22:41.287 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.287 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.287 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:41.287 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.288 09:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.192 00:22:43.192 real 0m36.914s 00:22:43.192 user 1m58.379s 00:22:43.192 sys 0m7.485s 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:43.192 ************************************ 00:22:43.192 END TEST nvmf_failover 00:22:43.192 ************************************ 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.192 09:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.452 ************************************ 00:22:43.452 START TEST nvmf_host_discovery 00:22:43.452 ************************************ 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:43.452 * Looking for test storage... 
00:22:43.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.452 --rc genhtml_branch_coverage=1 00:22:43.452 --rc genhtml_function_coverage=1 00:22:43.452 --rc genhtml_legend=1 00:22:43.452 --rc geninfo_all_blocks=1 00:22:43.452 --rc geninfo_unexecuted_blocks=1 00:22:43.452 00:22:43.452 ' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.452 --rc genhtml_branch_coverage=1 00:22:43.452 --rc genhtml_function_coverage=1 00:22:43.452 --rc genhtml_legend=1 00:22:43.452 --rc geninfo_all_blocks=1 00:22:43.452 --rc geninfo_unexecuted_blocks=1 00:22:43.452 00:22:43.452 ' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.452 --rc genhtml_branch_coverage=1 00:22:43.452 --rc genhtml_function_coverage=1 00:22:43.452 --rc genhtml_legend=1 00:22:43.452 --rc geninfo_all_blocks=1 00:22:43.452 --rc geninfo_unexecuted_blocks=1 00:22:43.452 00:22:43.452 ' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.452 --rc genhtml_branch_coverage=1 00:22:43.452 --rc genhtml_function_coverage=1 00:22:43.452 --rc genhtml_legend=1 00:22:43.452 --rc geninfo_all_blocks=1 00:22:43.452 --rc geninfo_unexecuted_blocks=1 00:22:43.452 00:22:43.452 ' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:43.452 09:33:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.452 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.453 09:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:48.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:48.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.723 09:34:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:48.723 Found net devices under 0000:af:00.0: cvl_0_0 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:48.723 Found net devices under 0000:af:00.1: cvl_0_1 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.723 
09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:22:48.723 00:22:48.723 --- 10.0.0.2 ping statistics --- 00:22:48.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.723 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:22:48.723 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:22:48.723 00:22:48.724 --- 10.0.0.1 ping statistics --- 00:22:48.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.724 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3426631 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3426631 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3426631 ']' 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.724 09:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.724 [2024-12-13 09:34:00.989728] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:22:48.724 [2024-12-13 09:34:00.989770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.724 [2024-12-13 09:34:01.054368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.982 [2024-12-13 09:34:01.092863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.982 [2024-12-13 09:34:01.092895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.982 [2024-12-13 09:34:01.092903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.982 [2024-12-13 09:34:01.092909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.982 [2024-12-13 09:34:01.092914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.982 [2024-12-13 09:34:01.093415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.982 [2024-12-13 09:34:01.224499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:48.982 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.983 [2024-12-13 09:34:01.232669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.983 null0 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.983 null1 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3426734 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3426734 /tmp/host.sock 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3426734 ']' 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.983 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.983 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.983 [2024-12-13 09:34:01.306401] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:22:48.983 [2024-12-13 09:34:01.306440] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426734 ] 00:22:49.241 [2024-12-13 09:34:01.368440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.241 [2024-12-13 09:34:01.409853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.241 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.500 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.501 [2024-12-13 09:34:01.830187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.501 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:49.759 09:34:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.759 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.760 09:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.760 09:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:49.760 09:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:50.326 [2024-12-13 09:34:02.530776] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:50.326 [2024-12-13 09:34:02.530795] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:50.326 
[2024-12-13 09:34:02.530807] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:50.326 [2024-12-13 09:34:02.617054] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:50.326 [2024-12-13 09:34:02.671564] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:50.326 [2024-12-13 09:34:02.672327] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21b2fa0:1 started. 00:22:50.326 [2024-12-13 09:34:02.673575] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:50.326 [2024-12-13 09:34:02.673591] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:50.326 [2024-12-13 09:34:02.679788] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b2fa0 was disconnected and freed. delete nvme_qpair. 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.893 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.152 [2024-12-13 09:34:03.412102] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21b3320:1 started. 00:22:51.153 [2024-12-13 09:34:03.421444] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b3320 was disconnected and freed. delete nvme_qpair. 
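Stripped of the xtrace noise, the RPC sequence that host/discovery.sh drives between @50 and @111 is roughly the following; the addresses, ports and NQNs are the ones visible in this log, and scripts/rpc.py again stands in for rpc_cmd:

  # Host: enable bdev_nvme debug logging, then start discovery against the
  # target's discovery service on 10.0.0.2:8009, attaching controllers as "nvme*".
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Target: create the subsystem, expose null0, listen on 4420, allow the host NQN.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # Second namespace; the host should now report bdevs nvme0n1 and nvme0n2.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

Until the listener and the host entry exist on the target, the discovery log page has nothing to report to this host, which is why the earlier get_subsystem_names / get_bdev_list checks all compare empty strings and the controller nvme0 only attaches after host/discovery.sh@103.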
00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.153 [2024-12-13 09:34:03.486655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:51.153 [2024-12-13 09:34:03.487360] bdev_nvme.c:7499:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:51.153 [2024-12-13 09:34:03.487380] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:51.153 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.412 [2024-12-13 09:34:03.574962] bdev_nvme.c:7441:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:51.412 09:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:51.412 [2024-12-13 09:34:03.677573] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:51.412 [2024-12-13 09:34:03.677608] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:51.412 [2024-12-13 09:34:03.677619] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:51.412 [2024-12-13 09:34:03.677624] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
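The repeated rpc_cmd | jq | sort | xargs pipelines in this trace are small query helpers defined in host/discovery.sh. Reconstructed from the commands visible in the log (the function names match the trace; the bodies below are a sketch that calls scripts/rpc.py on the host socket instead of the rpc_cmd wrapper):

  get_subsystem_names() {    # controller names on the host side, e.g. "nvme0"
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {          # attached namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {    # listener ports a controller is connected to, e.g. "4420 4421"
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() { # bdev notifications seen since the last recorded $notify_id
      notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

With the 4421 listener added at host/discovery.sh@118, the check at @122 waits for get_subsystem_paths nvme0 to report both ports.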
00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.347 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.606 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:52.606 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.607 [2024-12-13 09:34:04.722539] bdev_nvme.c:7499:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:52.607 [2024-12-13 09:34:04.722564] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.607 [2024-12-13 09:34:04.726362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.607 [2024-12-13 09:34:04.726380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.607 [2024-12-13 09:34:04.726388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.607 [2024-12-13 09:34:04.726395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.607 [2024-12-13 09:34:04.726402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.607 [2024-12-13 09:34:04.726409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.607 [2024-12-13 09:34:04.726416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.607 [2024-12-13 09:34:04.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.607 [2024-12-13 09:34:04.726428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.607 [2024-12-13 09:34:04.736375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.607 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.607 [2024-12-13 09:34:04.746410] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.607 [2024-12-13 09:34:04.746422] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.607 [2024-12-13 09:34:04.746428] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.746432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.607 [2024-12-13 09:34:04.746453] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.746655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.607 [2024-12-13 09:34:04.746671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.607 [2024-12-13 09:34:04.746679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.607 [2024-12-13 09:34:04.746690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.607 [2024-12-13 09:34:04.746706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.607 [2024-12-13 09:34:04.746713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.607 [2024-12-13 09:34:04.746721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.607 [2024-12-13 09:34:04.746727] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:52.607 [2024-12-13 09:34:04.746732] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
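Each of these checks goes through the waitforcondition helper from common/autotest_common.sh: the condition string is re-evaluated up to ten times, one second apart, and the helper only fails if the condition never becomes true. An approximate reconstruction, matching the local max=10 / (( max-- )) / eval / sleep 1 steps visible in the xtrace output (the real helper may handle the failure case differently):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0            # condition met, stop polling
          fi
          sleep 1
      done
      echo "condition \"$cond\" never became true" >&2
      return 1
  }

  # As used at host/discovery.sh@129 above:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'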
00:22:52.607 [2024-12-13 09:34:04.746736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.607 [2024-12-13 09:34:04.756478] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.607 [2024-12-13 09:34:04.756488] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.607 [2024-12-13 09:34:04.756492] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.756496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.607 [2024-12-13 09:34:04.756509] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.756680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.607 [2024-12-13 09:34:04.756691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.607 [2024-12-13 09:34:04.756699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.607 [2024-12-13 09:34:04.756708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.607 [2024-12-13 09:34:04.756724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.607 [2024-12-13 09:34:04.756731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.607 [2024-12-13 09:34:04.756737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.607 [2024-12-13 09:34:04.756743] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:52.607 [2024-12-13 09:34:04.756747] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.607 [2024-12-13 09:34:04.756751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.607 [2024-12-13 09:34:04.766540] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.607 [2024-12-13 09:34:04.766551] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.607 [2024-12-13 09:34:04.766555] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.766559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.607 [2024-12-13 09:34:04.766572] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:52.607 [2024-12-13 09:34:04.766805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.607 [2024-12-13 09:34:04.766817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.607 [2024-12-13 09:34:04.766824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.607 [2024-12-13 09:34:04.766835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.607 [2024-12-13 09:34:04.766876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.607 [2024-12-13 09:34:04.766884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.607 [2024-12-13 09:34:04.766890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.607 [2024-12-13 09:34:04.766896] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:52.607 [2024-12-13 09:34:04.766901] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.607 [2024-12-13 09:34:04.766904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.607 [2024-12-13 09:34:04.776602] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.607 [2024-12-13 09:34:04.776614] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.607 [2024-12-13 09:34:04.776617] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.776621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.607 [2024-12-13 09:34:04.776634] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:52.607 [2024-12-13 09:34:04.776900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.608 [2024-12-13 09:34:04.776912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.608 [2024-12-13 09:34:04.776919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.608 [2024-12-13 09:34:04.776929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.608 [2024-12-13 09:34:04.776950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.608 [2024-12-13 09:34:04.776957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.608 [2024-12-13 09:34:04.776964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.608 [2024-12-13 09:34:04.776969] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:52.608 [2024-12-13 09:34:04.776974] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.608 [2024-12-13 09:34:04.776978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.608 [2024-12-13 09:34:04.786665] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.608 [2024-12-13 09:34:04.786678] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.608 [2024-12-13 09:34:04.786683] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.608 [2024-12-13 09:34:04.786687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.608 [2024-12-13 09:34:04.786700] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:52.608 [2024-12-13 09:34:04.786938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.608 [2024-12-13 09:34:04.786951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.608 [2024-12-13 09:34:04.786958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.608 [2024-12-13 09:34:04.786969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.608 [2024-12-13 09:34:04.786978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.608 [2024-12-13 09:34:04.786984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.608 [2024-12-13 09:34:04.786991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.608 [2024-12-13 09:34:04.786997] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:52.608 [2024-12-13 09:34:04.787001] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.608 [2024-12-13 09:34:04.787005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.608 [2024-12-13 09:34:04.796731] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.608 [2024-12-13 09:34:04.796742] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.608 [2024-12-13 09:34:04.796745] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.608 [2024-12-13 09:34:04.796749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.608 [2024-12-13 09:34:04.796761] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:52.608 [2024-12-13 09:34:04.797003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.608 [2024-12-13 09:34:04.797015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.608 [2024-12-13 09:34:04.797022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.608 [2024-12-13 09:34:04.797038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.608 [2024-12-13 09:34:04.797053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.608 [2024-12-13 09:34:04.797059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.608 [2024-12-13 09:34:04.797065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.608 [2024-12-13 09:34:04.797070] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:52.608 [2024-12-13 09:34:04.797075] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.608 [2024-12-13 09:34:04.797078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:52.608 [2024-12-13 09:34:04.806791] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:52.608 [2024-12-13 09:34:04.806801] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:52.608 [2024-12-13 09:34:04.806805] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:52.608 [2024-12-13 09:34:04.806809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:52.608 [2024-12-13 09:34:04.806820] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:52.608 [2024-12-13 09:34:04.807063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.608 [2024-12-13 09:34:04.807074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183410 with addr=10.0.0.2, port=4420 00:22:52.608 [2024-12-13 09:34:04.807082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183410 is same with the state(6) to be set 00:22:52.608 [2024-12-13 09:34:04.807091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183410 (9): Bad file descriptor 00:22:52.608 [2024-12-13 09:34:04.807100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:52.608 [2024-12-13 09:34:04.807106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:52.608 [2024-12-13 09:34:04.807113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:52.608 [2024-12-13 09:34:04.807118] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:52.608 [2024-12-13 09:34:04.807122] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:52.608 [2024-12-13 09:34:04.807126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
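The burst of connect() failures above is expected: host/discovery.sh@127 removed the 4420 listener from the target, so the host's bdev_nvme layer keeps getting errno 111 (ECONNREFUSED) while retrying the now-dead 10.0.0.2:4420 path, until the next discovery log page (below) drops that path and leaves only 4421. A sketch of the step and of the condition the test then waits for, with port values taken from this log:

  # Target: stop listening on the original port.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host: once the discovery poller processes the change, only 4421 should remain.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected output: 4421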
00:22:52.608 [2024-12-13 09:34:04.808670] bdev_nvme.c:7304:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:52.608 [2024-12-13 09:34:04.808685] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.608 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.609 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.871 09:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.871 09:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.805 [2024-12-13 09:34:06.140594] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:53.805 [2024-12-13 09:34:06.140611] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:53.805 [2024-12-13 09:34:06.140621] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:54.063 [2024-12-13 09:34:06.226873] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:54.063 [2024-12-13 09:34:06.326585] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:54.063 [2024-12-13 09:34:06.327115] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2180d00:1 started. 00:22:54.063 [2024-12-13 09:34:06.328697] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:54.063 [2024-12-13 09:34:06.328722] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.063 [2024-12-13 09:34:06.329930] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2180d00 was disconnected and freed. delete nvme_qpair. 
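Note: with discovery for "nvme" re-attached above, the next step deliberately repeats the same bdev_nvme_start_discovery call and expects the JSON-RPC to be rejected with -17 "File exists". The xtrace below goes through autotest_common.sh's negation helper; a condensed sketch of that expected-failure pattern (simplified, not the exact helper, which also records the exit status in es):

    # Run a command that is supposed to fail; succeed only if it does fail.
    NOT() {
        if "$@"; then
            return 1    # the RPC unexpectedly succeeded
        fi
        return 0        # the RPC failed as expected (here: -17 "File exists")
    }

    # Usage matching the call traced below:
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    #     -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w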
00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.063 request: 00:22:54.063 { 00:22:54.063 "name": "nvme", 00:22:54.063 "trtype": "tcp", 00:22:54.063 "traddr": "10.0.0.2", 00:22:54.063 "adrfam": "ipv4", 00:22:54.063 "trsvcid": "8009", 00:22:54.063 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:54.063 "wait_for_attach": true, 00:22:54.063 "method": "bdev_nvme_start_discovery", 00:22:54.063 "req_id": 1 00:22:54.063 } 00:22:54.063 Got JSON-RPC error response 00:22:54.063 response: 00:22:54.063 { 00:22:54.063 "code": -17, 00:22:54.063 "message": "File exists" 00:22:54.063 } 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.063 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.321 request: 00:22:54.321 { 00:22:54.321 "name": "nvme_second", 00:22:54.321 "trtype": "tcp", 00:22:54.321 "traddr": "10.0.0.2", 00:22:54.321 "adrfam": "ipv4", 00:22:54.321 "trsvcid": "8009", 00:22:54.321 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:54.321 "wait_for_attach": true, 00:22:54.321 "method": "bdev_nvme_start_discovery", 00:22:54.321 "req_id": 1 00:22:54.321 } 00:22:54.321 Got JSON-RPC error response 00:22:54.321 response: 00:22:54.321 { 00:22:54.321 "code": -17, 00:22:54.321 "message": "File exists" 00:22:54.321 } 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:54.321 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:54.322 09:34:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.258 [2024-12-13 09:34:07.564086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.258 [2024-12-13 09:34:07.564113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2184130 with addr=10.0.0.2, port=8010 00:22:55.258 [2024-12-13 09:34:07.564130] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:55.258 [2024-12-13 09:34:07.564153] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:55.258 [2024-12-13 09:34:07.564159] bdev_nvme.c:7585:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:56.634 [2024-12-13 09:34:08.566574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.634 [2024-12-13 09:34:08.566598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219d320 with addr=10.0.0.2, port=8010 00:22:56.634 [2024-12-13 09:34:08.566611] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:56.634 [2024-12-13 09:34:08.566617] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:56.634 [2024-12-13 09:34:08.566639] bdev_nvme.c:7585:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:57.569 [2024-12-13 09:34:09.568720] bdev_nvme.c:7560:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:57.569 request: 00:22:57.569 { 00:22:57.569 "name": "nvme_second", 00:22:57.569 "trtype": "tcp", 00:22:57.569 "traddr": "10.0.0.2", 00:22:57.569 "adrfam": "ipv4", 00:22:57.569 "trsvcid": "8010", 00:22:57.569 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:57.569 "wait_for_attach": false, 00:22:57.569 "attach_timeout_ms": 3000, 00:22:57.569 "method": "bdev_nvme_start_discovery", 00:22:57.569 "req_id": 1 00:22:57.569 } 00:22:57.569 Got JSON-RPC error response 00:22:57.569 response: 00:22:57.569 { 00:22:57.569 "code": -110, 00:22:57.569 "message": "Connection timed out" 00:22:57.569 } 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:57.569 09:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:57.569 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3426734 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.570 rmmod nvme_tcp 00:22:57.570 rmmod nvme_fabrics 00:22:57.570 rmmod nvme_keyring 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3426631 ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3426631 ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3426631' 00:22:57.570 killing process with pid 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3426631 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:57.570 09:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.570 09:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.102 09:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.102 00:23:00.102 real 0m16.406s 00:23:00.102 user 0m20.155s 00:23:00.102 sys 0m5.240s 00:23:00.102 09:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.102 09:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.102 ************************************ 00:23:00.102 END TEST nvmf_host_discovery 00:23:00.102 ************************************ 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.102 ************************************ 00:23:00.102 START TEST nvmf_host_multipath_status 00:23:00.102 ************************************ 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:00.102 * Looking for test storage... 
00:23:00.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.102 --rc genhtml_branch_coverage=1 00:23:00.102 --rc genhtml_function_coverage=1 00:23:00.102 --rc genhtml_legend=1 00:23:00.102 --rc geninfo_all_blocks=1 00:23:00.102 --rc geninfo_unexecuted_blocks=1 00:23:00.102 00:23:00.102 ' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.102 --rc genhtml_branch_coverage=1 00:23:00.102 --rc genhtml_function_coverage=1 00:23:00.102 --rc genhtml_legend=1 00:23:00.102 --rc geninfo_all_blocks=1 00:23:00.102 --rc geninfo_unexecuted_blocks=1 00:23:00.102 00:23:00.102 ' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.102 --rc genhtml_branch_coverage=1 00:23:00.102 --rc genhtml_function_coverage=1 00:23:00.102 --rc genhtml_legend=1 00:23:00.102 --rc geninfo_all_blocks=1 00:23:00.102 --rc geninfo_unexecuted_blocks=1 00:23:00.102 00:23:00.102 ' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.102 --rc genhtml_branch_coverage=1 00:23:00.102 --rc genhtml_function_coverage=1 00:23:00.102 --rc genhtml_legend=1 00:23:00.102 --rc geninfo_all_blocks=1 00:23:00.102 --rc geninfo_unexecuted_blocks=1 00:23:00.102 00:23:00.102 ' 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
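Note: the cmp_versions trace above is the harness checking whether the installed lcov reports a version below 2 before exporting the lcov 1.x branch/function-coverage options. A condensed sketch of that numeric, field-by-field comparison, assuming plain decimal version fields (the real scripts/common.sh adds more argument validation):

    # lt A B  ->  exit 0 when dotted version A sorts strictly before B
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    # lt 1.15 2  -> 0, so the lcov 1.x-compatible LCOV_OPTS above get exported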
00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.102 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.103 09:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.367 09:34:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.367 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:05.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:05.368 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:05.368 Found net devices under 0000:af:00.0: cvl_0_0 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:23:05.368 Found net devices under 0000:af:00.1: cvl_0_1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.368 09:34:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:23:05.368 00:23:05.368 --- 10.0.0.2 ping statistics --- 00:23:05.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.368 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:23:05.368 00:23:05.368 --- 10.0.0.1 ping statistics --- 00:23:05.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.368 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3431505 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3431505 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3431505 ']' 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.368 09:34:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.368 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:05.368 [2024-12-13 09:34:17.456723] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:23:05.368 [2024-12-13 09:34:17.456772] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.368 [2024-12-13 09:34:17.524881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:05.368 [2024-12-13 09:34:17.567573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.368 [2024-12-13 09:34:17.567609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.368 [2024-12-13 09:34:17.567617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.368 [2024-12-13 09:34:17.567623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.369 [2024-12-13 09:34:17.567628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.369 [2024-12-13 09:34:17.568776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.369 [2024-12-13 09:34:17.568780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3431505 00:23:05.369 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:05.626 [2024-12-13 09:34:17.866045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.626 09:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:05.884 Malloc0 00:23:05.884 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:06.142 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.142 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.400 [2024-12-13 09:34:18.631091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.400 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.659 [2024-12-13 09:34:18.819586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3431774 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3431774 /var/tmp/bdevperf.sock 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3431774 ']' 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
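The target-side setup traced above reduces to a short sequence of rpc.py calls against the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace. A minimal sketch of that sequence, with paths, ports and the nqn taken from the log and the $rpc shorthand introduced purely for readability (this is an illustration, not the multipath_status.sh source):

# Sketch only: reproduces the nvmf setup visible in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
# two listeners on the same address but different ports give the initiator two paths:
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421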
00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.659 09:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:06.917 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.917 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:06.917 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:06.917 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:07.569 Nvme0n1 00:23:07.569 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:07.569 Nvme0n1 00:23:07.875 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:07.875 09:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:09.773 09:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:09.773 09:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:09.773 09:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:10.031 09:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:10.965 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:10.965 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.965 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.965 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:11.224 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.224 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:11.224 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.224 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.481 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.481 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.481 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.481 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.739 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.739 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.739 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.739 09:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.997 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:12.255 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.255 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:12.256 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
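Every port_status call in the trace follows the same pattern: query the bdevperf RPC socket for the I/O paths of the attached controller and extract one boolean (current, connected or accessible) for the path whose listener port matches. A minimal re-implementation of that helper, assuming the same rpc.py and /var/tmp/bdevperf.sock as in the log:

# Reconstructed sketch of the port_status check; the jq filter is the one used in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock

port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    # bdev_nvme_get_io_paths lists poll groups, each with its io_paths; match on transport.trsvcid.
    actual=$($rpc -s "$bdevperf_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Usage mirroring the trace: the 4420 path should be current, the 4421 path merely accessible.
port_status 4420 current true
port_status 4421 accessible true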
00:23:12.514 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.772 09:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:13.704 09:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:13.704 09:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.704 09:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.704 09:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.962 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.220 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.220 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.220 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.220 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.477 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.477 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.477 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
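Each set_ANA_state step above is just two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port. A hedged sketch using the nqn and addresses from the log (the real helper lives in test/nvmf/host/multipath_status.sh):

# Illustrative helper, not the original script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    local state_4420=$1 state_4421=$2
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}

# Example transition matching the trace: demote 4420, keep 4421 optimized, let the host re-read ANA.
set_ANA_state non_optimized optimized
sleep 1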
00:23:14.477 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.736 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.736 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:14.736 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.736 09:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.994 09:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.994 09:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:14.994 09:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:14.994 09:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:15.253 09:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.628 09:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:16.886 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.886 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:16.886 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.886 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.144 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.144 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.144 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.144 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.402 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.402 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.402 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.403 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.403 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.403 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:17.403 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:17.660 09:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:17.919 09:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:18.855 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:18.855 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.855 09:34:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.855 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.114 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.114 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:19.114 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.114 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.372 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.372 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.372 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.373 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.631 09:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:19.889 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.890 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:19.890 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.890 09:34:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.148 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.148 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:20.148 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:20.407 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.666 09:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:21.601 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:21.601 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:21.601 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.601 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:21.859 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.859 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:21.859 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.859 09:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:21.859 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.859 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:21.859 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.859 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.118 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.118 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.118 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.118 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.376 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.376 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:22.376 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.376 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:22.634 09:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:22.897 09:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:23.154 09:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:24.088 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:24.088 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:24.088 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.088 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.346 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.346 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:24.346 09:34:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.346 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.604 09:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.862 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.862 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:24.862 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.862 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.120 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.120 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:25.120 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.120 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.378 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.378 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:25.635 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:25.635 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:25.636 09:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:25.893 09:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:26.828 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:26.828 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:26.828 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.828 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.086 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.086 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:27.086 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.086 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.344 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.344 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.344 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.344 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.602 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.602 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.602 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.602 09:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.860 09:34:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.860 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.119 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.119 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:28.119 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:28.378 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:28.636 09:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:29.570 09:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:29.570 09:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:29.570 09:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.570 09:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.828 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.828 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:29.828 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.828 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.086 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.086 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.086 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.086 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.344 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.603 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.603 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:30.603 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.603 09:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.861 09:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.861 09:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:30.861 09:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:31.119 09:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:31.378 09:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
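After bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active, the trace shows the 4420 and 4421 paths reporting current==true at the same time, and the remaining check_status blocks repeat the same pattern for the other ANA combinations. The loop below is an illustrative condensation of that flow, reusing the helpers sketched earlier; the exact per-state expectations are in multipath_status.sh and are not reproduced here.

# Illustrative walk over ANA-state pairs; only the 'connected' expectation, which stays true
# throughout the trace, is asserted here.
for pair in "optimized optimized" "non_optimized optimized" "non_optimized non_optimized" "non_optimized inaccessible"; do
    set_ANA_state $pair            # word-splitting is intentional: two states, one per port
    sleep 1                        # give bdevperf time to refresh the ANA log page
    port_status 4420 connected true
    port_status 4421 connected true
done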
00:23:32.314 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:32.314 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:32.314 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.314 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.572 09:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.831 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.831 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.831 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.831 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.088 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.088 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:33.088 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.088 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.346 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.346 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:33.346 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.346 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.604 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.604 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:33.604 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:33.604 09:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.861 09:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.235 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.493 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:35.493 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.493 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.493 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.751 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.751 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.751 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.751 09:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.009 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.009 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:36.009 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.009 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3431774 ']' 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431774' 00:23:36.301 killing process with pid 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3431774 00:23:36.301 { 00:23:36.301 "results": [ 00:23:36.301 { 00:23:36.301 "job": "Nvme0n1", 
00:23:36.301 "core_mask": "0x4", 00:23:36.301 "workload": "verify", 00:23:36.301 "status": "terminated", 00:23:36.301 "verify_range": { 00:23:36.301 "start": 0, 00:23:36.301 "length": 16384 00:23:36.301 }, 00:23:36.301 "queue_depth": 128, 00:23:36.301 "io_size": 4096, 00:23:36.301 "runtime": 28.394637, 00:23:36.301 "iops": 10553.718295465444, 00:23:36.301 "mibps": 41.22546209166189, 00:23:36.301 "io_failed": 0, 00:23:36.301 "io_timeout": 0, 00:23:36.301 "avg_latency_us": 12105.908729304349, 00:23:36.301 "min_latency_us": 353.0361904761905, 00:23:36.301 "max_latency_us": 3083812.083809524 00:23:36.301 } 00:23:36.301 ], 00:23:36.301 "core_count": 1 00:23:36.301 } 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3431774 00:23:36.301 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.301 [2024-12-13 09:34:18.861688] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:23:36.301 [2024-12-13 09:34:18.861739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431774 ] 00:23:36.301 [2024-12-13 09:34:18.918575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.301 [2024-12-13 09:34:18.957571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.301 Running I/O for 90 seconds... 00:23:36.301 11001.00 IOPS, 42.97 MiB/s [2024-12-13T08:34:48.667Z] 11183.50 IOPS, 43.69 MiB/s [2024-12-13T08:34:48.667Z] 11326.33 IOPS, 44.24 MiB/s [2024-12-13T08:34:48.667Z] 11371.00 IOPS, 44.42 MiB/s [2024-12-13T08:34:48.667Z] 11326.20 IOPS, 44.24 MiB/s [2024-12-13T08:34:48.667Z] 11327.00 IOPS, 44.25 MiB/s [2024-12-13T08:34:48.667Z] 11353.43 IOPS, 44.35 MiB/s [2024-12-13T08:34:48.667Z] 11344.62 IOPS, 44.31 MiB/s [2024-12-13T08:34:48.667Z] 11323.78 IOPS, 44.23 MiB/s [2024-12-13T08:34:48.667Z] 11321.90 IOPS, 44.23 MiB/s [2024-12-13T08:34:48.667Z] 11316.55 IOPS, 44.21 MiB/s [2024-12-13T08:34:48.667Z] 11309.50 IOPS, 44.18 MiB/s [2024-12-13T08:34:48.667Z] [2024-12-13 09:34:32.562556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.301 [2024-12-13 09:34:32.562589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.301 [2024-12-13 09:34:32.562609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.301 [2024-12-13 09:34:32.562617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.301 [2024-12-13 09:34:32.562630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.301 [2024-12-13 09:34:32.562637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.301 [2024-12-13 09:34:32.562650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.301 [2024-12-13 09:34:32.562657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.301 [2024-12-13 09:34:32.562669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.301 [2024-12-13 09:34:32.562676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.301 [2024-12-13 09:34:32.562688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.301 [2024-12-13 09:34:32.562694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.562713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.562732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.562751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.562981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.302 [2024-12-13 09:34:32.563393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
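The long run of "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions in this capture is the expected side effect of the ANA transitions this test drives: (03/02) is NVMe status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), so commands queued on a path whose listener has just been made inaccessible complete with an ANA error and are re-driven on the remaining usable path, which is consistent with the job summary still reporting "io_failed": 0. For a rough offline count of how many commands hit that status, an illustrative one-liner over the file dumped above:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt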
00:23:36.302 [2024-12-13 09:34:32.563624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.302 [2024-12-13 09:34:32.563783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.302 [2024-12-13 09:34:32.563795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.563984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.563990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.303 [2024-12-13 09:34:32.564177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.303 [2024-12-13 09:34:32.564927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.303 [2024-12-13 09:34:32.564941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.564949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.564960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.564967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.564979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.564986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:23:36.304 [2024-12-13 09:34:32.565213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.304 [2024-12-13 09:34:32.565585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.304 [2024-12-13 09:34:32.565617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.304 [2024-12-13 09:34:32.565624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.305 [2024-12-13 09:34:32.565778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.565891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.565910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.565930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.565949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.565962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.565969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.566420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.566441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.566472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.566494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.305 [2024-12-13 09:34:32.566513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.305 [2024-12-13 09:34:32.566752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.305 [2024-12-13 09:34:32.566758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:23:36.306 [2024-12-13 09:34:32.566790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.566985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.566997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.306 [2024-12-13 09:34:32.567699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.306 [2024-12-13 09:34:32.567711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 
[2024-12-13 09:34:32.567718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87784 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.567986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.567998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568093] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568282] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.307 [2024-12-13 09:34:32.568290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.568302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.576364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.576385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.576403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.576423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.307 [2024-12-13 09:34:32.576441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.307 [2024-12-13 09:34:32.576452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.576473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e 
p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.576578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.576585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.308 [2024-12-13 09:34:32.577314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.308 [2024-12-13 09:34:32.577507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.308 [2024-12-13 09:34:32.577537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.308 [2024-12-13 09:34:32.577543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.577785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.577986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.577993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.578005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.578012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.578024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.309 [2024-12-13 09:34:32.578030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.578042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.578049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 
dnr:0 00:23:36.309 [2024-12-13 09:34:32.578062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.578069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.578080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.578087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.309 [2024-12-13 09:34:32.578099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.309 [2024-12-13 09:34:32.578105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.578516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.578524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.310 [2024-12-13 09:34:32.579239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.310 [2024-12-13 09:34:32.579277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.310 [2024-12-13 09:34:32.579289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.311 [2024-12-13 09:34:32.579787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:23:36.311 [2024-12-13 09:34:32.579805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.311 [2024-12-13 09:34:32.579812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.579943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.579962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.579980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.579992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.579999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.312 [2024-12-13 09:34:32.580723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.312 [2024-12-13 09:34:32.580741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.312 [2024-12-13 09:34:32.580760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.312 [2024-12-13 09:34:32.580774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 
nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.580984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.580990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.313 [2024-12-13 09:34:32.581532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:23:36.313 [2024-12-13 09:34:32.581623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.313 [2024-12-13 09:34:32.581648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.313 [2024-12-13 09:34:32.581660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.314 [2024-12-13 09:34:32.581785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.581986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.581993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.582102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.582109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.587154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.587174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.587194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.314 [2024-12-13 09:34:32.587214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.314 [2024-12-13 09:34:32.587233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.314 [2024-12-13 09:34:32.587245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.587987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.587993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:23:36.315 [2024-12-13 09:34:32.588177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.315 [2024-12-13 09:34:32.588214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.315 [2024-12-13 09:34:32.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.316 [2024-12-13 09:34:32.588505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.316 [2024-12-13 09:34:32.588729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.316 [2024-12-13 09:34:32.588768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.316 [2024-12-13 09:34:32.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.588919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.588938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.588957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.588976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.588996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.317 [2024-12-13 09:34:32.589221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.589240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.589259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.317 [2024-12-13 09:34:32.589272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.317 [2024-12-13 09:34:32.589279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 
dnr:0 00:23:36.318 [2024-12-13 09:34:32.589290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.589297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.589309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.589316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.589960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.589973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.589988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.589995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.318 [2024-12-13 09:34:32.590279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.318 [2024-12-13 09:34:32.590473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.318 [2024-12-13 09:34:32.590485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.319 [2024-12-13 09:34:32.590492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.590806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.319 [2024-12-13 09:34:32.591390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:23:36.319 [2024-12-13 09:34:32.591409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.319 [2024-12-13 09:34:32.591415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.320 [2024-12-13 09:34:32.591926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.320 [2024-12-13 09:34:32.591962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.320 [2024-12-13 09:34:32.591981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.320 [2024-12-13 09:34:32.591993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.321 [2024-12-13 09:34:32.592794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:23:36.321 [2024-12-13 09:34:32.592977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.592983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.592999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.593006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.321 [2024-12-13 09:34:32.593018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.321 [2024-12-13 09:34:32.593024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.322 [2024-12-13 09:34:32.593270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.322 [2024-12-13 09:34:32.593520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.322 [2024-12-13 09:34:32.593532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.322 [2024-12-13 09:34:32.593539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.593644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.593651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.323 [2024-12-13 09:34:32.594480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:23:36.323 [2024-12-13 09:34:32.594498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.323 [2024-12-13 09:34:32.594505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.594801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.594808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.324 [2024-12-13 09:34:32.595365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.324 [2024-12-13 09:34:32.595378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.324 [2024-12-13 09:34:32.595384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.595544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.595984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.595996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:23:36.325 [2024-12-13 09:34:32.596147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.325 [2024-12-13 09:34:32.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.596172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.325 [2024-12-13 09:34:32.596191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.325 [2024-12-13 09:34:32.596203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.326 [2024-12-13 09:34:32.596946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.596983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.596995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.597002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.597014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.326 [2024-12-13 09:34:32.597020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.597036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.597043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.597055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.597061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.597073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.597080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.326 [2024-12-13 09:34:32.597092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.326 [2024-12-13 09:34:32.597098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.327 [2024-12-13 09:34:32.597119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.327 [2024-12-13 09:34:32.597138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.327 [2024-12-13 09:34:32.597159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.327 [2024-12-13 09:34:32.597177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.327 [2024-12-13 09:34:32.597196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:23:36.327 [2024-12-13 09:34:32.597892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.327 [2024-12-13 09:34:32.597968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.327 [2024-12-13 09:34:32.597974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.597986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.597993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.328 [2024-12-13 09:34:32.598445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.328 [2024-12-13 09:34:32.598462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.328 [2024-12-13 09:34:32.598469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.598930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.598949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.598967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.598986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.598998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.599007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.599025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.599044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.329 [2024-12-13 09:34:32.599063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.329 
[2024-12-13 09:34:32.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.329 [2024-12-13 09:34:32.599505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.329 [2024-12-13 09:34:32.599517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.599679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599853] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.599979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.330 [2024-12-13 09:34:32.599986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.600249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.600269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 
09:34:32.600287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.600306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.330 [2024-12-13 09:34:32.600325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.330 [2024-12-13 09:34:32.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87128 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.331 [2024-12-13 09:34:32.600655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.600730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.600742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604218] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.331 [2024-12-13 09:34:32.604263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.331 [2024-12-13 09:34:32.604275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:23:36.332 [2024-12-13 09:34:32.604705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.604991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.604998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.332 [2024-12-13 09:34:32.605140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.332 [2024-12-13 09:34:32.605147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.333 [2024-12-13 09:34:32.605260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.333 [2024-12-13 09:34:32.605607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.333 [2024-12-13 09:34:32.605715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.333 [2024-12-13 09:34:32.605722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:23:36.334 [2024-12-13 09:34:32.605828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.605982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.605988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.606009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.334 [2024-12-13 09:34:32.606029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.334 [2024-12-13 09:34:32.606211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.334 [2024-12-13 09:34:32.606218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.606230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.606237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.606249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.606255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.606267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.606274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.606286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.606293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.606305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.606312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.335 [2024-12-13 09:34:32.607098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.335 [2024-12-13 09:34:32.607463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.335 [2024-12-13 09:34:32.607500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.335 [2024-12-13 09:34:32.607512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:23:36.336 [2024-12-13 09:34:32.607665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.607726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.607733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.336 [2024-12-13 09:34:32.608425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.336 [2024-12-13 09:34:32.608601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.337 [2024-12-13 09:34:32.608766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.608986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.608993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.337 [2024-12-13 09:34:32.609004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.337 [2024-12-13 09:34:32.609011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.338 [2024-12-13 09:34:32.609298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:23:36.338 [2024-12-13 09:34:32.609328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.338 [2024-12-13 09:34:32.609581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.338 [2024-12-13 09:34:32.609588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.609600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.609607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.339 [2024-12-13 09:34:32.610150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.339 [2024-12-13 09:34:32.610307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.339 [2024-12-13 09:34:32.610319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:36.339 [2024-12-13 09:34:32.610326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:23:36.339 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats here for every outstanding READ and WRITE on sqid:1 (nsid:1, lba 87104-88120, len:8); each command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, timestamps 2024-12-13 09:34:32.610326 through 09:34:32.615632 ...] 
00:23:36.347 [2024-12-13 09:34:32.615632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:23:36.347 [2024-12-13 09:34:32.615823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.615882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.615889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.347 [2024-12-13 09:34:32.616312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.347 [2024-12-13 09:34:32.616418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.347 [2024-12-13 09:34:32.616429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.616455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.348 [2024-12-13 09:34:32.616794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.616986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.616998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.348 [2024-12-13 09:34:32.617005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.348 [2024-12-13 09:34:32.617158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.348 [2024-12-13 09:34:32.617170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:23:36.349 [2024-12-13 09:34:32.617681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.617802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.617988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.617994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.618006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.618012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.618024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.618031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.618043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.349 [2024-12-13 09:34:32.618051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.618062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.349 [2024-12-13 09:34:32.618069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.349 [2024-12-13 09:34:32.618080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.350 [2024-12-13 09:34:32.618337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.350 [2024-12-13 09:34:32.618932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.350 [2024-12-13 09:34:32.618947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.618954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.618968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.618977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.618992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.618998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:23:36.351 [2024-12-13 09:34:32.619035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619522] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.351 [2024-12-13 09:34:32.619728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 
09:34:32.619751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.351 [2024-12-13 09:34:32.619773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.351 [2024-12-13 09:34:32.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.619796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.619812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.619819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88032 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.352 [2024-12-13 09:34:32.620661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.352 [2024-12-13 09:34:32.620949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.352 [2024-12-13 09:34:32.620956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.620973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.353 [2024-12-13 09:34:32.620980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.620997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.353 [2024-12-13 09:34:32.621004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.621021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.353 [2024-12-13 09:34:32.621028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.621046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.353 [2024-12-13 09:34:32.621052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.621070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:32.621077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:32.621094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:32.621101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:23:36.353 10920.00 IOPS, 42.66 MiB/s [2024-12-13T08:34:48.719Z] 10140.00 IOPS, 39.61 MiB/s [2024-12-13T08:34:48.719Z] 9464.00 IOPS, 36.97 MiB/s [2024-12-13T08:34:48.719Z] 9157.62 IOPS, 35.77 MiB/s [2024-12-13T08:34:48.719Z] 9299.82 IOPS, 36.33 MiB/s [2024-12-13T08:34:48.719Z] 9433.83 IOPS, 36.85 MiB/s [2024-12-13T08:34:48.719Z] 9656.95 IOPS, 37.72 MiB/s [2024-12-13T08:34:48.719Z] 9848.85 IOPS, 38.47 MiB/s [2024-12-13T08:34:48.719Z] 9985.81 IOPS, 39.01 MiB/s [2024-12-13T08:34:48.719Z] 10056.45 IOPS, 39.28 MiB/s [2024-12-13T08:34:48.719Z] 10124.52 IOPS, 39.55 MiB/s [2024-12-13T08:34:48.719Z] 10222.46 IOPS, 39.93 MiB/s [2024-12-13T08:34:48.719Z] 10353.48 IOPS, 40.44 MiB/s [2024-12-13T08:34:48.719Z] 10471.69 IOPS, 40.91 MiB/s [2024-12-13T08:34:48.719Z] [2024-12-13 09:34:46.136351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:36.353 [2024-12-13 09:34:46.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.353 [2024-12-13 09:34:46.136980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.353 [2024-12-13 09:34:46.136987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.136998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.354 [2024-12-13 09:34:46.137368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.354 [2024-12-13 09:34:46.137387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.354 [2024-12-13 09:34:46.137406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:23:36.354 [2024-12-13 09:34:46.137499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.354 [2024-12-13 09:34:46.137567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.137985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.137997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.138004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.138017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.354 [2024-12-13 09:34:46.138024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:36.354 [2024-12-13 09:34:46.138036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.355 [2024-12-13 09:34:46.138062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.355 [2024-12-13 09:34:46.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.355 [2024-12-13 09:34:46.138099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.355 [2024-12-13 09:34:46.138118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:36.355 [2024-12-13 09:34:46.138267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.355 [2024-12-13 09:34:46.138275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.355 10513.04 IOPS, 41.07 MiB/s [2024-12-13T08:34:48.721Z] 10543.57 IOPS, 41.19 MiB/s [2024-12-13T08:34:48.721Z] Received shutdown signal, test time was about 28.395261 seconds 00:23:36.355 00:23:36.355 Latency(us) 00:23:36.355 [2024-12-13T08:34:48.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.355 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:36.355 Verification LBA range: start 0x0 length 0x4000 00:23:36.355 Nvme0n1 : 28.39 10553.72 41.23 0.00 0.00 12105.91 353.04 3083812.08 00:23:36.355 [2024-12-13T08:34:48.721Z] =================================================================================================================== 00:23:36.355 [2024-12-13T08:34:48.721Z] Total : 10553.72 41.23 0.00 0.00 12105.91 353.04 3083812.08 00:23:36.355 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.614 rmmod nvme_tcp 00:23:36.614 rmmod nvme_fabrics 00:23:36.614 rmmod nvme_keyring 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3431505 ']' 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3431505 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3431505 ']' 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3431505 00:23:36.614 09:34:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431505 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431505' 00:23:36.614 killing process with pid 3431505 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3431505 00:23:36.614 09:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3431505 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.873 09:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.411 00:23:39.411 real 0m39.115s 00:23:39.411 user 1m47.635s 00:23:39.411 sys 0m10.660s 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.411 ************************************ 00:23:39.411 END TEST nvmf_host_multipath_status 00:23:39.411 ************************************ 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- 
# set +x 00:23:39.411 ************************************ 00:23:39.411 START TEST nvmf_discovery_remove_ifc 00:23:39.411 ************************************ 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:39.411 * Looking for test storage... 00:23:39.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.411 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.412 --rc genhtml_branch_coverage=1 00:23:39.412 --rc genhtml_function_coverage=1 00:23:39.412 --rc genhtml_legend=1 00:23:39.412 --rc geninfo_all_blocks=1 00:23:39.412 --rc geninfo_unexecuted_blocks=1 00:23:39.412 00:23:39.412 ' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.412 --rc genhtml_branch_coverage=1 00:23:39.412 --rc genhtml_function_coverage=1 00:23:39.412 --rc genhtml_legend=1 00:23:39.412 --rc geninfo_all_blocks=1 00:23:39.412 --rc geninfo_unexecuted_blocks=1 00:23:39.412 00:23:39.412 ' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.412 --rc genhtml_branch_coverage=1 00:23:39.412 --rc genhtml_function_coverage=1 00:23:39.412 --rc genhtml_legend=1 00:23:39.412 --rc geninfo_all_blocks=1 00:23:39.412 --rc geninfo_unexecuted_blocks=1 00:23:39.412 00:23:39.412 ' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.412 --rc genhtml_branch_coverage=1 00:23:39.412 --rc genhtml_function_coverage=1 00:23:39.412 --rc genhtml_legend=1 00:23:39.412 --rc geninfo_all_blocks=1 00:23:39.412 --rc geninfo_unexecuted_blocks=1 00:23:39.412 00:23:39.412 ' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.412 
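The xtrace records above step through a dotted-version comparison (lt 1.15 2 → cmp_versions 1.15 '<' 2) used to gate the lcov coverage options: both version strings are split on ".", "-" and ":", each component is normalized to an integer, and the components are compared pairwise until one differs. A minimal sketch of that logic, with illustrative helper names rather than the exact scripts/common.sh code, might look like:

    #!/usr/bin/env bash
    # Sketch of a dotted-version comparison in the spirit of the trace above;
    # helper names and structure are assumptions, not the SPDK implementation.

    decimal() {
        # Echo the component if it is a plain integer, otherwise 0.
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b lt=0 gt=0 eq=0

        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"

        # Walk the longer component list; missing components count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            if ((a > b)); then gt=1; break; fi
            if ((a < b)); then lt=1; break; fi
        done
        ((gt == 0 && lt == 0)) && eq=1

        case $op in
            '<')  ((lt == 1)) ;;
            '>')  ((gt == 1)) ;;
            '<=') ((gt == 0)) ;;
            '>=') ((lt == 0)) ;;
            '==') ((eq == 1)) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2

With "1.15" and "2" the first components already differ (1 < 2), so lt returns success and the newer LCOV option set is selected, matching the "return 0" seen in the trace.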
09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.412 09:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.680 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:44.681 09:34:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:44.681 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.681 09:34:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:44.681 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:44.681 Found net devices under 0000:af:00.0: cvl_0_0 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:44.681 Found net devices under 0000:af:00.1: cvl_0_1 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.681 09:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.681 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.940 
09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:23:44.940 00:23:44.940 --- 10.0.0.2 ping statistics --- 00:23:44.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.940 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:44.940 00:23:44.940 --- 10.0.0.1 ping statistics --- 00:23:44.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.940 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3440307 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3440307 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3440307 ']' 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:44.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.940 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.198 [2024-12-13 09:34:57.320919] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:23:45.198 [2024-12-13 09:34:57.320964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.198 [2024-12-13 09:34:57.387111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.198 [2024-12-13 09:34:57.427030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.198 [2024-12-13 09:34:57.427065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.198 [2024-12-13 09:34:57.427073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.198 [2024-12-13 09:34:57.427079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.198 [2024-12-13 09:34:57.427085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.198 [2024-12-13 09:34:57.427580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:45.198 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.199 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.457 [2024-12-13 09:34:57.567327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.457 [2024-12-13 09:34:57.575511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:45.457 null0 00:23:45.457 [2024-12-13 09:34:57.607490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3440330 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3440330 /tmp/host.sock 00:23:45.457 09:34:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3440330 ']' 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:45.457 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.457 [2024-12-13 09:34:57.673876] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:23:45.457 [2024-12-13 09:34:57.673915] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440330 ] 00:23:45.457 [2024-12-13 09:34:57.736120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.457 [2024-12-13 09:34:57.775795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.457 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:45.716 09:34:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.716 09:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.649 [2024-12-13 09:34:58.965597] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.649 [2024-12-13 09:34:58.965619] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.649 [2024-12-13 09:34:58.965637] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.907 [2024-12-13 09:34:59.053897] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:46.907 [2024-12-13 09:34:59.113513] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:46.907 [2024-12-13 09:34:59.114242] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x176eb50:1 started. 00:23:46.907 [2024-12-13 09:34:59.115587] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:46.907 [2024-12-13 09:34:59.115626] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:46.907 [2024-12-13 09:34:59.115645] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:46.907 [2024-12-13 09:34:59.115657] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.907 [2024-12-13 09:34:59.115676] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.907 [2024-12-13 09:34:59.122706] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x176eb50 was disconnected and freed. delete nvme_qpair. 
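The trace from here on keeps re-entering two small helpers from discovery_remove_ifc.sh: get_bdev_list (the @29 lines) and wait_for_bdev (the @33/@34 lines). A rough reconstruction of their shape, pieced together from the commands visible in the trace rather than copied from the script itself:

  # Flatten the host app's bdev names into one sorted line (host RPC socket is /tmp/host.sock).
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the bdev list matches what the test expects:
  # "nvme0n1" right after attach, "" once the interface is pulled, "nvme1n1" at the end.
  # (The real helper may also enforce a timeout; that detail is not visible in the trace.)
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }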
00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:46.907 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.908 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.165 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.165 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:47.165 09:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.098 09:35:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:48.098 09:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.032 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.370 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:49.370 09:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:50.361 09:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.293 09:35:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:51.293 09:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.226 [2024-12-13 09:35:04.557106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:52.226 [2024-12-13 09:35:04.557150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.226 [2024-12-13 09:35:04.557160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.226 [2024-12-13 09:35:04.557169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.226 [2024-12-13 09:35:04.557176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.226 [2024-12-13 09:35:04.557183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.226 [2024-12-13 09:35:04.557189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.226 [2024-12-13 09:35:04.557195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.226 [2024-12-13 09:35:04.557202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.226 [2024-12-13 09:35:04.557208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.226 [2024-12-13 09:35:04.557215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.226 [2024-12-13 09:35:04.557221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b310 is same with the state(6) to be set 00:23:52.226 09:35:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.226 09:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.226 [2024-12-13 09:35:04.567126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b310 (9): Bad file descriptor 00:23:52.226 [2024-12-13 09:35:04.577162] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:52.226 [2024-12-13 09:35:04.577172] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:52.226 [2024-12-13 09:35:04.577178] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:52.226 [2024-12-13 09:35:04.577182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:52.226 [2024-12-13 09:35:04.577204] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.598 [2024-12-13 09:35:05.606501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:53.598 [2024-12-13 09:35:05.606553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b310 with addr=10.0.0.2, port=4420 00:23:53.598 [2024-12-13 09:35:05.606572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b310 is same with the state(6) to be set 00:23:53.598 [2024-12-13 09:35:05.606612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b310 (9): Bad file descriptor 00:23:53.598 [2024-12-13 09:35:05.607077] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:53.598 [2024-12-13 09:35:05.607107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:53.598 [2024-12-13 09:35:05.607119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:53.598 [2024-12-13 09:35:05.607130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:53.598 [2024-12-13 09:35:05.607141] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
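The errno 110 (Connection timed out) and "Bad file descriptor" noise above is the point of the test rather than a failure of it: a few seconds earlier, at trace lines @75/@76, the script deleted the target-side address and downed the link while nvme0n1 was still attached. Consolidated from the trace:

  # Pull the target interface out from under the live NVMe/TCP connection
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

With 10.0.0.2 gone, every reconnect attempt to port 4420 can only fail, which is what drives the controller into the failed state logged next.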
00:23:53.598 [2024-12-13 09:35:05.607149] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:53.598 [2024-12-13 09:35:05.607156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:53.598 [2024-12-13 09:35:05.607166] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:53.598 [2024-12-13 09:35:05.607174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:53.598 09:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:54.534 [2024-12-13 09:35:06.609645] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.534 [2024-12-13 09:35:06.609667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.534 [2024-12-13 09:35:06.609678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.534 [2024-12-13 09:35:06.609685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.534 [2024-12-13 09:35:06.609691] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:54.535 [2024-12-13 09:35:06.609697] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.535 [2024-12-13 09:35:06.609702] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.535 [2024-12-13 09:35:06.609706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:54.535 [2024-12-13 09:35:06.609727] bdev_nvme.c:7268:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:54.535 [2024-12-13 09:35:06.609748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.535 [2024-12-13 09:35:06.609757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.535 [2024-12-13 09:35:06.609766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.535 [2024-12-13 09:35:06.609772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.535 [2024-12-13 09:35:06.609779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.535 [2024-12-13 09:35:06.609785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.535 [2024-12-13 09:35:06.609792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.535 [2024-12-13 09:35:06.609801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.535 [2024-12-13 09:35:06.609808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.535 [2024-12-13 09:35:06.609814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.535 [2024-12-13 09:35:06.609821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
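The "Remove discovery entry" and "in failed state" messages are the expected outcome given the discovery options chosen back at @69: with a one-second reconnect delay and a two-second controller-loss timeout, the host gives up on the lost controller quickly, its namespace bdev is deleted, and get_bdev_list soon returns an empty string. For reference, the RPC that set this policy (copied from the trace; /tmp/host.sock is the host app's RPC socket):

  # Start discovery with aggressive loss/reconnect/fast-io-fail timeouts
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach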
00:23:54.535 [2024-12-13 09:35:06.609844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173aa60 (9): Bad file descriptor 00:23:54.535 [2024-12-13 09:35:06.610841] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:54.535 [2024-12-13 09:35:06.610851] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:54.535 09:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.467 09:35:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.467 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.724 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:55.725 09:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.657 [2024-12-13 09:35:08.668609] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:56.657 [2024-12-13 09:35:08.668626] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:56.657 [2024-12-13 09:35:08.668642] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.657 [2024-12-13 09:35:08.756897] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:56.657 09:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.657 [2024-12-13 09:35:08.937858] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:56.657 [2024-12-13 09:35:08.938484] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x174d650:1 started. 
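The second half of the test mirrors the first. At trace lines @82/@83 the interface is restored, and the script simply waits for the replacement bdev (which comes up as nvme1/nvme1n1 rather than nvme0):

  # Bring the target-side interface back and wait for discovery to re-create a namespace bdev
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1

No new bdev_nvme_start_discovery call is needed; once 10.0.0.2:8009 is reachable again the existing discovery poller re-attaches on its own and reports nqn.2016-06.io.spdk:cnode0 found again, as shown below.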
00:23:56.657 [2024-12-13 09:35:08.939471] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:56.657 [2024-12-13 09:35:08.939501] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:56.657 [2024-12-13 09:35:08.939518] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:56.657 [2024-12-13 09:35:08.939530] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:56.657 [2024-12-13 09:35:08.939536] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:56.657 [2024-12-13 09:35:08.947615] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x174d650 was disconnected and freed. delete nvme_qpair. 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:57.589 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3440330 00:23:57.847 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3440330 ']' 00:23:57.847 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3440330 00:23:57.847 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:57.847 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.847 09:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440330 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440330' 00:23:57.847 killing process with pid 3440330 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3440330 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3440330 00:23:57.847 09:35:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.847 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.847 rmmod nvme_tcp 00:23:57.847 rmmod nvme_fabrics 00:23:57.847 rmmod nvme_keyring 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3440307 ']' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3440307 ']' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440307' 00:23:58.105 killing process with pid 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3440307 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.105 09:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.633 00:24:00.633 real 0m21.297s 00:24:00.633 user 0m26.607s 00:24:00.633 sys 0m5.729s 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.633 ************************************ 00:24:00.633 END TEST nvmf_discovery_remove_ifc 00:24:00.633 ************************************ 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.633 ************************************ 00:24:00.633 START TEST nvmf_identify_kernel_target 00:24:00.633 ************************************ 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:00.633 * Looking for test storage... 
00:24:00.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.633 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.634 --rc genhtml_branch_coverage=1 00:24:00.634 --rc genhtml_function_coverage=1 00:24:00.634 --rc genhtml_legend=1 00:24:00.634 --rc geninfo_all_blocks=1 00:24:00.634 --rc geninfo_unexecuted_blocks=1 00:24:00.634 00:24:00.634 ' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.634 --rc genhtml_branch_coverage=1 00:24:00.634 --rc genhtml_function_coverage=1 00:24:00.634 --rc genhtml_legend=1 00:24:00.634 --rc geninfo_all_blocks=1 00:24:00.634 --rc geninfo_unexecuted_blocks=1 00:24:00.634 00:24:00.634 ' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.634 --rc genhtml_branch_coverage=1 00:24:00.634 --rc genhtml_function_coverage=1 00:24:00.634 --rc genhtml_legend=1 00:24:00.634 --rc geninfo_all_blocks=1 00:24:00.634 --rc geninfo_unexecuted_blocks=1 00:24:00.634 00:24:00.634 ' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.634 --rc genhtml_branch_coverage=1 00:24:00.634 --rc genhtml_function_coverage=1 00:24:00.634 --rc genhtml_legend=1 00:24:00.634 --rc geninfo_all_blocks=1 00:24:00.634 --rc geninfo_unexecuted_blocks=1 00:24:00.634 00:24:00.634 ' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:00.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:00.634 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.635 09:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.893 09:35:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:05.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:05.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:05.893 Found net devices under 0000:af:00.0: cvl_0_0 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:05.893 Found net devices under 0000:af:00.1: cvl_0_1 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.893 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.894 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.151 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.151 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.151 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.151 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.151 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:24:06.152 00:24:06.152 --- 10.0.0.2 ping statistics --- 00:24:06.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.152 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:06.152 00:24:06.152 --- 10.0.0.1 ping statistics --- 00:24:06.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.152 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.152 09:35:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:06.152 09:35:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:08.677 Waiting for block devices as requested 00:24:08.677 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:08.934 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:08.934 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:08.934 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:09.191 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:09.191 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:09.191 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:09.191 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:09.449 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:09.449 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:09.449 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:09.706 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:09.706 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:09.706 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:09.706 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:09.963 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:09.963 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
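The configure_kernel_target trace that follows builds a Linux kernel NVMe-oF target through nvmet configfs: it creates a subsystem for nqn.2016-06.io.spdk:testnqn, backs namespace 1 with the /dev/nvme0n1 device selected above, opens a TCP port on 10.0.0.1:4420, and links the subsystem into that port. The xtrace lines below print the echo commands without their redirection targets, so the attribute paths in this condensed sketch are inferred from the standard nvmet configfs layout rather than copied from the log.

  # Hedged sketch of the kernel target setup performed by nvmf/common.sh below.
  # Attribute file names follow the usual /sys/kernel/config/nvmet layout; the
  # trace itself only shows "echo <value>" with the redirection target elided.
  nqn=nqn.2016-06.io.spdk:testnqn
  dev=/dev/nvme0n1
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet          # the trace loads nvmet at this point; the TCP transport
  modprobe nvmet-tcp      # module must also be present for the tcp port below
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1          > "$subsys/attr_allow_any_host"
  echo "$dev"     > "$subsys/namespaces/1/device_path"
  echo 1          > "$subsys/namespaces/1/enable"
  echo 10.0.0.1   > "$port/addr_traddr"
  echo tcp        > "$port/addr_trtype"
  echo 4420       > "$port/addr_trsvcid"
  echo ipv4       > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the TCP port

Once the symlink is in place, the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call in the trace sees both the discovery subsystem and the new testnqn subsystem, which is the two-record discovery log page printed below.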
00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:09.963 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:10.221 No valid GPT data, bailing 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:10.221 00:24:10.221 Discovery Log Number of Records 2, Generation counter 2 00:24:10.221 =====Discovery Log Entry 0====== 00:24:10.221 trtype: tcp 00:24:10.221 adrfam: ipv4 00:24:10.221 subtype: current discovery subsystem 00:24:10.221 treq: not specified, sq flow control disable supported 00:24:10.221 portid: 1 00:24:10.221 trsvcid: 4420 00:24:10.221 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:10.221 traddr: 10.0.0.1 00:24:10.221 eflags: none 00:24:10.221 sectype: none 00:24:10.221 =====Discovery Log Entry 1====== 00:24:10.221 trtype: tcp 00:24:10.221 adrfam: ipv4 00:24:10.221 subtype: nvme subsystem 00:24:10.221 treq: not specified, sq flow control disable 
supported 00:24:10.221 portid: 1 00:24:10.221 trsvcid: 4420 00:24:10.221 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:10.221 traddr: 10.0.0.1 00:24:10.221 eflags: none 00:24:10.221 sectype: none 00:24:10.221 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:10.221 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:10.221 ===================================================== 00:24:10.221 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:10.221 ===================================================== 00:24:10.221 Controller Capabilities/Features 00:24:10.221 ================================ 00:24:10.221 Vendor ID: 0000 00:24:10.221 Subsystem Vendor ID: 0000 00:24:10.221 Serial Number: cb693b9614bcac8967f4 00:24:10.221 Model Number: Linux 00:24:10.221 Firmware Version: 6.8.9-20 00:24:10.221 Recommended Arb Burst: 0 00:24:10.221 IEEE OUI Identifier: 00 00 00 00:24:10.221 Multi-path I/O 00:24:10.221 May have multiple subsystem ports: No 00:24:10.221 May have multiple controllers: No 00:24:10.221 Associated with SR-IOV VF: No 00:24:10.221 Max Data Transfer Size: Unlimited 00:24:10.221 Max Number of Namespaces: 0 00:24:10.221 Max Number of I/O Queues: 1024 00:24:10.222 NVMe Specification Version (VS): 1.3 00:24:10.222 NVMe Specification Version (Identify): 1.3 00:24:10.222 Maximum Queue Entries: 1024 00:24:10.222 Contiguous Queues Required: No 00:24:10.222 Arbitration Mechanisms Supported 00:24:10.222 Weighted Round Robin: Not Supported 00:24:10.222 Vendor Specific: Not Supported 00:24:10.222 Reset Timeout: 7500 ms 00:24:10.222 Doorbell Stride: 4 bytes 00:24:10.222 NVM Subsystem Reset: Not Supported 00:24:10.222 Command Sets Supported 00:24:10.222 NVM Command Set: Supported 00:24:10.222 Boot Partition: Not Supported 00:24:10.222 Memory Page Size Minimum: 4096 bytes 00:24:10.222 Memory Page Size Maximum: 4096 bytes 00:24:10.222 Persistent Memory Region: Not Supported 00:24:10.222 Optional Asynchronous Events Supported 00:24:10.222 Namespace Attribute Notices: Not Supported 00:24:10.222 Firmware Activation Notices: Not Supported 00:24:10.222 ANA Change Notices: Not Supported 00:24:10.222 PLE Aggregate Log Change Notices: Not Supported 00:24:10.222 LBA Status Info Alert Notices: Not Supported 00:24:10.222 EGE Aggregate Log Change Notices: Not Supported 00:24:10.222 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.222 Zone Descriptor Change Notices: Not Supported 00:24:10.222 Discovery Log Change Notices: Supported 00:24:10.222 Controller Attributes 00:24:10.222 128-bit Host Identifier: Not Supported 00:24:10.222 Non-Operational Permissive Mode: Not Supported 00:24:10.222 NVM Sets: Not Supported 00:24:10.222 Read Recovery Levels: Not Supported 00:24:10.222 Endurance Groups: Not Supported 00:24:10.222 Predictable Latency Mode: Not Supported 00:24:10.222 Traffic Based Keep ALive: Not Supported 00:24:10.222 Namespace Granularity: Not Supported 00:24:10.222 SQ Associations: Not Supported 00:24:10.222 UUID List: Not Supported 00:24:10.222 Multi-Domain Subsystem: Not Supported 00:24:10.222 Fixed Capacity Management: Not Supported 00:24:10.222 Variable Capacity Management: Not Supported 00:24:10.222 Delete Endurance Group: Not Supported 00:24:10.222 Delete NVM Set: Not Supported 00:24:10.222 Extended LBA Formats Supported: Not Supported 00:24:10.222 Flexible Data Placement 
Supported: Not Supported 00:24:10.222 00:24:10.222 Controller Memory Buffer Support 00:24:10.222 ================================ 00:24:10.222 Supported: No 00:24:10.222 00:24:10.222 Persistent Memory Region Support 00:24:10.222 ================================ 00:24:10.222 Supported: No 00:24:10.222 00:24:10.222 Admin Command Set Attributes 00:24:10.222 ============================ 00:24:10.222 Security Send/Receive: Not Supported 00:24:10.222 Format NVM: Not Supported 00:24:10.222 Firmware Activate/Download: Not Supported 00:24:10.222 Namespace Management: Not Supported 00:24:10.222 Device Self-Test: Not Supported 00:24:10.222 Directives: Not Supported 00:24:10.222 NVMe-MI: Not Supported 00:24:10.222 Virtualization Management: Not Supported 00:24:10.222 Doorbell Buffer Config: Not Supported 00:24:10.222 Get LBA Status Capability: Not Supported 00:24:10.222 Command & Feature Lockdown Capability: Not Supported 00:24:10.222 Abort Command Limit: 1 00:24:10.222 Async Event Request Limit: 1 00:24:10.222 Number of Firmware Slots: N/A 00:24:10.222 Firmware Slot 1 Read-Only: N/A 00:24:10.222 Firmware Activation Without Reset: N/A 00:24:10.222 Multiple Update Detection Support: N/A 00:24:10.222 Firmware Update Granularity: No Information Provided 00:24:10.222 Per-Namespace SMART Log: No 00:24:10.222 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.222 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:10.222 Command Effects Log Page: Not Supported 00:24:10.222 Get Log Page Extended Data: Supported 00:24:10.222 Telemetry Log Pages: Not Supported 00:24:10.222 Persistent Event Log Pages: Not Supported 00:24:10.222 Supported Log Pages Log Page: May Support 00:24:10.222 Commands Supported & Effects Log Page: Not Supported 00:24:10.222 Feature Identifiers & Effects Log Page:May Support 00:24:10.222 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.222 Data Area 4 for Telemetry Log: Not Supported 00:24:10.222 Error Log Page Entries Supported: 1 00:24:10.222 Keep Alive: Not Supported 00:24:10.222 00:24:10.222 NVM Command Set Attributes 00:24:10.222 ========================== 00:24:10.222 Submission Queue Entry Size 00:24:10.222 Max: 1 00:24:10.222 Min: 1 00:24:10.222 Completion Queue Entry Size 00:24:10.222 Max: 1 00:24:10.222 Min: 1 00:24:10.222 Number of Namespaces: 0 00:24:10.222 Compare Command: Not Supported 00:24:10.222 Write Uncorrectable Command: Not Supported 00:24:10.222 Dataset Management Command: Not Supported 00:24:10.222 Write Zeroes Command: Not Supported 00:24:10.222 Set Features Save Field: Not Supported 00:24:10.222 Reservations: Not Supported 00:24:10.222 Timestamp: Not Supported 00:24:10.222 Copy: Not Supported 00:24:10.222 Volatile Write Cache: Not Present 00:24:10.222 Atomic Write Unit (Normal): 1 00:24:10.222 Atomic Write Unit (PFail): 1 00:24:10.222 Atomic Compare & Write Unit: 1 00:24:10.222 Fused Compare & Write: Not Supported 00:24:10.222 Scatter-Gather List 00:24:10.222 SGL Command Set: Supported 00:24:10.222 SGL Keyed: Not Supported 00:24:10.222 SGL Bit Bucket Descriptor: Not Supported 00:24:10.222 SGL Metadata Pointer: Not Supported 00:24:10.222 Oversized SGL: Not Supported 00:24:10.222 SGL Metadata Address: Not Supported 00:24:10.222 SGL Offset: Supported 00:24:10.222 Transport SGL Data Block: Not Supported 00:24:10.222 Replay Protected Memory Block: Not Supported 00:24:10.222 00:24:10.222 Firmware Slot Information 00:24:10.222 ========================= 00:24:10.222 Active slot: 0 00:24:10.222 00:24:10.222 00:24:10.222 Error Log 00:24:10.222 
========= 00:24:10.222 00:24:10.222 Active Namespaces 00:24:10.222 ================= 00:24:10.222 Discovery Log Page 00:24:10.222 ================== 00:24:10.222 Generation Counter: 2 00:24:10.222 Number of Records: 2 00:24:10.222 Record Format: 0 00:24:10.222 00:24:10.222 Discovery Log Entry 0 00:24:10.222 ---------------------- 00:24:10.222 Transport Type: 3 (TCP) 00:24:10.222 Address Family: 1 (IPv4) 00:24:10.222 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:10.222 Entry Flags: 00:24:10.222 Duplicate Returned Information: 0 00:24:10.222 Explicit Persistent Connection Support for Discovery: 0 00:24:10.222 Transport Requirements: 00:24:10.222 Secure Channel: Not Specified 00:24:10.222 Port ID: 1 (0x0001) 00:24:10.222 Controller ID: 65535 (0xffff) 00:24:10.222 Admin Max SQ Size: 32 00:24:10.222 Transport Service Identifier: 4420 00:24:10.222 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:10.222 Transport Address: 10.0.0.1 00:24:10.222 Discovery Log Entry 1 00:24:10.222 ---------------------- 00:24:10.222 Transport Type: 3 (TCP) 00:24:10.222 Address Family: 1 (IPv4) 00:24:10.222 Subsystem Type: 2 (NVM Subsystem) 00:24:10.222 Entry Flags: 00:24:10.222 Duplicate Returned Information: 0 00:24:10.222 Explicit Persistent Connection Support for Discovery: 0 00:24:10.222 Transport Requirements: 00:24:10.222 Secure Channel: Not Specified 00:24:10.222 Port ID: 1 (0x0001) 00:24:10.222 Controller ID: 65535 (0xffff) 00:24:10.222 Admin Max SQ Size: 32 00:24:10.222 Transport Service Identifier: 4420 00:24:10.222 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:10.222 Transport Address: 10.0.0.1 00:24:10.222 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:10.480 get_feature(0x01) failed 00:24:10.480 get_feature(0x02) failed 00:24:10.480 get_feature(0x04) failed 00:24:10.480 ===================================================== 00:24:10.480 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:10.480 ===================================================== 00:24:10.480 Controller Capabilities/Features 00:24:10.480 ================================ 00:24:10.480 Vendor ID: 0000 00:24:10.480 Subsystem Vendor ID: 0000 00:24:10.480 Serial Number: 3e1671f9c101ce9351f0 00:24:10.480 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:10.480 Firmware Version: 6.8.9-20 00:24:10.480 Recommended Arb Burst: 6 00:24:10.480 IEEE OUI Identifier: 00 00 00 00:24:10.480 Multi-path I/O 00:24:10.480 May have multiple subsystem ports: Yes 00:24:10.480 May have multiple controllers: Yes 00:24:10.480 Associated with SR-IOV VF: No 00:24:10.480 Max Data Transfer Size: Unlimited 00:24:10.480 Max Number of Namespaces: 1024 00:24:10.481 Max Number of I/O Queues: 128 00:24:10.481 NVMe Specification Version (VS): 1.3 00:24:10.481 NVMe Specification Version (Identify): 1.3 00:24:10.481 Maximum Queue Entries: 1024 00:24:10.481 Contiguous Queues Required: No 00:24:10.481 Arbitration Mechanisms Supported 00:24:10.481 Weighted Round Robin: Not Supported 00:24:10.481 Vendor Specific: Not Supported 00:24:10.481 Reset Timeout: 7500 ms 00:24:10.481 Doorbell Stride: 4 bytes 00:24:10.481 NVM Subsystem Reset: Not Supported 00:24:10.481 Command Sets Supported 00:24:10.481 NVM Command Set: Supported 00:24:10.481 Boot Partition: Not Supported 00:24:10.481 
Memory Page Size Minimum: 4096 bytes 00:24:10.481 Memory Page Size Maximum: 4096 bytes 00:24:10.481 Persistent Memory Region: Not Supported 00:24:10.481 Optional Asynchronous Events Supported 00:24:10.481 Namespace Attribute Notices: Supported 00:24:10.481 Firmware Activation Notices: Not Supported 00:24:10.481 ANA Change Notices: Supported 00:24:10.481 PLE Aggregate Log Change Notices: Not Supported 00:24:10.481 LBA Status Info Alert Notices: Not Supported 00:24:10.481 EGE Aggregate Log Change Notices: Not Supported 00:24:10.481 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.481 Zone Descriptor Change Notices: Not Supported 00:24:10.481 Discovery Log Change Notices: Not Supported 00:24:10.481 Controller Attributes 00:24:10.481 128-bit Host Identifier: Supported 00:24:10.481 Non-Operational Permissive Mode: Not Supported 00:24:10.481 NVM Sets: Not Supported 00:24:10.481 Read Recovery Levels: Not Supported 00:24:10.481 Endurance Groups: Not Supported 00:24:10.481 Predictable Latency Mode: Not Supported 00:24:10.481 Traffic Based Keep ALive: Supported 00:24:10.481 Namespace Granularity: Not Supported 00:24:10.481 SQ Associations: Not Supported 00:24:10.481 UUID List: Not Supported 00:24:10.481 Multi-Domain Subsystem: Not Supported 00:24:10.481 Fixed Capacity Management: Not Supported 00:24:10.481 Variable Capacity Management: Not Supported 00:24:10.481 Delete Endurance Group: Not Supported 00:24:10.481 Delete NVM Set: Not Supported 00:24:10.481 Extended LBA Formats Supported: Not Supported 00:24:10.481 Flexible Data Placement Supported: Not Supported 00:24:10.481 00:24:10.481 Controller Memory Buffer Support 00:24:10.481 ================================ 00:24:10.481 Supported: No 00:24:10.481 00:24:10.481 Persistent Memory Region Support 00:24:10.481 ================================ 00:24:10.481 Supported: No 00:24:10.481 00:24:10.481 Admin Command Set Attributes 00:24:10.481 ============================ 00:24:10.481 Security Send/Receive: Not Supported 00:24:10.481 Format NVM: Not Supported 00:24:10.481 Firmware Activate/Download: Not Supported 00:24:10.481 Namespace Management: Not Supported 00:24:10.481 Device Self-Test: Not Supported 00:24:10.481 Directives: Not Supported 00:24:10.481 NVMe-MI: Not Supported 00:24:10.481 Virtualization Management: Not Supported 00:24:10.481 Doorbell Buffer Config: Not Supported 00:24:10.481 Get LBA Status Capability: Not Supported 00:24:10.481 Command & Feature Lockdown Capability: Not Supported 00:24:10.481 Abort Command Limit: 4 00:24:10.481 Async Event Request Limit: 4 00:24:10.481 Number of Firmware Slots: N/A 00:24:10.481 Firmware Slot 1 Read-Only: N/A 00:24:10.481 Firmware Activation Without Reset: N/A 00:24:10.481 Multiple Update Detection Support: N/A 00:24:10.481 Firmware Update Granularity: No Information Provided 00:24:10.481 Per-Namespace SMART Log: Yes 00:24:10.481 Asymmetric Namespace Access Log Page: Supported 00:24:10.481 ANA Transition Time : 10 sec 00:24:10.481 00:24:10.481 Asymmetric Namespace Access Capabilities 00:24:10.481 ANA Optimized State : Supported 00:24:10.481 ANA Non-Optimized State : Supported 00:24:10.481 ANA Inaccessible State : Supported 00:24:10.481 ANA Persistent Loss State : Supported 00:24:10.481 ANA Change State : Supported 00:24:10.481 ANAGRPID is not changed : No 00:24:10.481 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:10.481 00:24:10.481 ANA Group Identifier Maximum : 128 00:24:10.481 Number of ANA Group Identifiers : 128 00:24:10.481 Max Number of Allowed Namespaces : 1024 00:24:10.481 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:10.481 Command Effects Log Page: Supported 00:24:10.481 Get Log Page Extended Data: Supported 00:24:10.481 Telemetry Log Pages: Not Supported 00:24:10.481 Persistent Event Log Pages: Not Supported 00:24:10.481 Supported Log Pages Log Page: May Support 00:24:10.481 Commands Supported & Effects Log Page: Not Supported 00:24:10.481 Feature Identifiers & Effects Log Page:May Support 00:24:10.481 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.481 Data Area 4 for Telemetry Log: Not Supported 00:24:10.481 Error Log Page Entries Supported: 128 00:24:10.481 Keep Alive: Supported 00:24:10.481 Keep Alive Granularity: 1000 ms 00:24:10.481 00:24:10.481 NVM Command Set Attributes 00:24:10.481 ========================== 00:24:10.481 Submission Queue Entry Size 00:24:10.481 Max: 64 00:24:10.481 Min: 64 00:24:10.481 Completion Queue Entry Size 00:24:10.481 Max: 16 00:24:10.481 Min: 16 00:24:10.481 Number of Namespaces: 1024 00:24:10.481 Compare Command: Not Supported 00:24:10.481 Write Uncorrectable Command: Not Supported 00:24:10.481 Dataset Management Command: Supported 00:24:10.481 Write Zeroes Command: Supported 00:24:10.481 Set Features Save Field: Not Supported 00:24:10.481 Reservations: Not Supported 00:24:10.481 Timestamp: Not Supported 00:24:10.481 Copy: Not Supported 00:24:10.481 Volatile Write Cache: Present 00:24:10.481 Atomic Write Unit (Normal): 1 00:24:10.481 Atomic Write Unit (PFail): 1 00:24:10.481 Atomic Compare & Write Unit: 1 00:24:10.481 Fused Compare & Write: Not Supported 00:24:10.481 Scatter-Gather List 00:24:10.481 SGL Command Set: Supported 00:24:10.481 SGL Keyed: Not Supported 00:24:10.481 SGL Bit Bucket Descriptor: Not Supported 00:24:10.481 SGL Metadata Pointer: Not Supported 00:24:10.481 Oversized SGL: Not Supported 00:24:10.481 SGL Metadata Address: Not Supported 00:24:10.481 SGL Offset: Supported 00:24:10.481 Transport SGL Data Block: Not Supported 00:24:10.481 Replay Protected Memory Block: Not Supported 00:24:10.481 00:24:10.481 Firmware Slot Information 00:24:10.481 ========================= 00:24:10.481 Active slot: 0 00:24:10.481 00:24:10.481 Asymmetric Namespace Access 00:24:10.481 =========================== 00:24:10.481 Change Count : 0 00:24:10.481 Number of ANA Group Descriptors : 1 00:24:10.481 ANA Group Descriptor : 0 00:24:10.481 ANA Group ID : 1 00:24:10.481 Number of NSID Values : 1 00:24:10.481 Change Count : 0 00:24:10.481 ANA State : 1 00:24:10.481 Namespace Identifier : 1 00:24:10.481 00:24:10.481 Commands Supported and Effects 00:24:10.481 ============================== 00:24:10.481 Admin Commands 00:24:10.481 -------------- 00:24:10.481 Get Log Page (02h): Supported 00:24:10.481 Identify (06h): Supported 00:24:10.481 Abort (08h): Supported 00:24:10.481 Set Features (09h): Supported 00:24:10.481 Get Features (0Ah): Supported 00:24:10.481 Asynchronous Event Request (0Ch): Supported 00:24:10.481 Keep Alive (18h): Supported 00:24:10.481 I/O Commands 00:24:10.481 ------------ 00:24:10.481 Flush (00h): Supported 00:24:10.481 Write (01h): Supported LBA-Change 00:24:10.481 Read (02h): Supported 00:24:10.481 Write Zeroes (08h): Supported LBA-Change 00:24:10.481 Dataset Management (09h): Supported 00:24:10.481 00:24:10.481 Error Log 00:24:10.481 ========= 00:24:10.481 Entry: 0 00:24:10.481 Error Count: 0x3 00:24:10.481 Submission Queue Id: 0x0 00:24:10.481 Command Id: 0x5 00:24:10.481 Phase Bit: 0 00:24:10.481 Status Code: 0x2 00:24:10.481 Status Code Type: 0x0 00:24:10.481 Do Not Retry: 1 00:24:10.481 
Error Location: 0x28 00:24:10.481 LBA: 0x0 00:24:10.481 Namespace: 0x0 00:24:10.481 Vendor Log Page: 0x0 00:24:10.481 ----------- 00:24:10.481 Entry: 1 00:24:10.481 Error Count: 0x2 00:24:10.481 Submission Queue Id: 0x0 00:24:10.481 Command Id: 0x5 00:24:10.481 Phase Bit: 0 00:24:10.481 Status Code: 0x2 00:24:10.481 Status Code Type: 0x0 00:24:10.481 Do Not Retry: 1 00:24:10.481 Error Location: 0x28 00:24:10.481 LBA: 0x0 00:24:10.481 Namespace: 0x0 00:24:10.481 Vendor Log Page: 0x0 00:24:10.481 ----------- 00:24:10.481 Entry: 2 00:24:10.481 Error Count: 0x1 00:24:10.481 Submission Queue Id: 0x0 00:24:10.481 Command Id: 0x4 00:24:10.481 Phase Bit: 0 00:24:10.481 Status Code: 0x2 00:24:10.481 Status Code Type: 0x0 00:24:10.481 Do Not Retry: 1 00:24:10.481 Error Location: 0x28 00:24:10.481 LBA: 0x0 00:24:10.481 Namespace: 0x0 00:24:10.481 Vendor Log Page: 0x0 00:24:10.481 00:24:10.481 Number of Queues 00:24:10.481 ================ 00:24:10.481 Number of I/O Submission Queues: 128 00:24:10.481 Number of I/O Completion Queues: 128 00:24:10.481 00:24:10.482 ZNS Specific Controller Data 00:24:10.482 ============================ 00:24:10.482 Zone Append Size Limit: 0 00:24:10.482 00:24:10.482 00:24:10.482 Active Namespaces 00:24:10.482 ================= 00:24:10.482 get_feature(0x05) failed 00:24:10.482 Namespace ID:1 00:24:10.482 Command Set Identifier: NVM (00h) 00:24:10.482 Deallocate: Supported 00:24:10.482 Deallocated/Unwritten Error: Not Supported 00:24:10.482 Deallocated Read Value: Unknown 00:24:10.482 Deallocate in Write Zeroes: Not Supported 00:24:10.482 Deallocated Guard Field: 0xFFFF 00:24:10.482 Flush: Supported 00:24:10.482 Reservation: Not Supported 00:24:10.482 Namespace Sharing Capabilities: Multiple Controllers 00:24:10.482 Size (in LBAs): 1953525168 (931GiB) 00:24:10.482 Capacity (in LBAs): 1953525168 (931GiB) 00:24:10.482 Utilization (in LBAs): 1953525168 (931GiB) 00:24:10.482 UUID: 975746b3-360e-4a71-a365-89abe9b9dc24 00:24:10.482 Thin Provisioning: Not Supported 00:24:10.482 Per-NS Atomic Units: Yes 00:24:10.482 Atomic Boundary Size (Normal): 0 00:24:10.482 Atomic Boundary Size (PFail): 0 00:24:10.482 Atomic Boundary Offset: 0 00:24:10.482 NGUID/EUI64 Never Reused: No 00:24:10.482 ANA group ID: 1 00:24:10.482 Namespace Write Protected: No 00:24:10.482 Number of LBA Formats: 1 00:24:10.482 Current LBA Format: LBA Format #00 00:24:10.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:10.482 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.482 rmmod nvme_tcp 00:24:10.482 rmmod nvme_fabrics 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:10.482 09:35:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.482 09:35:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:13.010 09:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:14.906 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:14.906 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:14.906 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:14.906 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:14.906 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:14.906 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:15.164 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:16.098 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:16.098 00:24:16.098 real 0m15.718s 00:24:16.098 user 0m3.962s 00:24:16.098 sys 0m8.091s 00:24:16.098 09:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.098 09:35:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.098 ************************************ 00:24:16.098 END TEST nvmf_identify_kernel_target 00:24:16.098 ************************************ 00:24:16.098 09:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:16.098 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.099 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.099 09:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 ************************************ 00:24:16.099 START TEST nvmf_auth_host 00:24:16.099 ************************************ 00:24:16.099 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:16.357 * Looking for test storage... 
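The clean_kernel_target step traced just above unwinds the kernel nvmet target through configfs, and the order matters: the port-to-subsystem link has to be removed before the namespace, port and subsystem directories can be rmdir'd, and only then can nvmet_tcp/nvmet be unloaded. A minimal equivalent sketch (run as root), assuming the bare "echo 0" in the trace disables the namespace enable attribute, since the redirect target is not visible in this excerpt:

#!/usr/bin/env bash
# Teardown of the kernel nvmet target used by the identify test above.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

[[ -e $subsys ]] || exit 0
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the "echo 0" in the trace
rm -f "$port/subsystems/$nqn"            # unlink the subsystem from the port first
rmdir "$subsys/namespaces/1"             # then remove the namespace directory
rmdir "$port"                            # then the port
rmdir "$subsys"                          # finally the subsystem itself
modprobe -r nvmet_tcp nvmet              # drop the transport and core modules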
00:24:16.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.357 --rc genhtml_branch_coverage=1 00:24:16.357 --rc genhtml_function_coverage=1 00:24:16.357 --rc genhtml_legend=1 00:24:16.357 --rc geninfo_all_blocks=1 00:24:16.357 --rc geninfo_unexecuted_blocks=1 00:24:16.357 00:24:16.357 ' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.357 --rc genhtml_branch_coverage=1 00:24:16.357 --rc genhtml_function_coverage=1 00:24:16.357 --rc genhtml_legend=1 00:24:16.357 --rc geninfo_all_blocks=1 00:24:16.357 --rc geninfo_unexecuted_blocks=1 00:24:16.357 00:24:16.357 ' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.357 --rc genhtml_branch_coverage=1 00:24:16.357 --rc genhtml_function_coverage=1 00:24:16.357 --rc genhtml_legend=1 00:24:16.357 --rc geninfo_all_blocks=1 00:24:16.357 --rc geninfo_unexecuted_blocks=1 00:24:16.357 00:24:16.357 ' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.357 --rc genhtml_branch_coverage=1 00:24:16.357 --rc genhtml_function_coverage=1 00:24:16.357 --rc genhtml_legend=1 00:24:16.357 --rc geninfo_all_blocks=1 00:24:16.357 --rc geninfo_unexecuted_blocks=1 00:24:16.357 00:24:16.357 ' 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.357 09:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:16.357 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.358 09:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.622 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.622 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.622 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.622 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.622 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.623 09:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:21.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:21.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.623 
09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:21.623 Found net devices under 0000:af:00.0: cvl_0_0 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:21.623 Found net devices under 0000:af:00.1: cvl_0_1 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.623 09:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.623 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.881 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.881 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.881 09:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:24:21.881 00:24:21.881 --- 10.0.0.2 ping statistics --- 00:24:21.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.881 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:24:21.881 00:24:21.881 --- 10.0.0.1 ping statistics --- 00:24:21.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.881 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3452095 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3452095 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3452095 ']' 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
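Everything the auth test does from here on talks to a target living in its own network namespace: the trace above moves the target-facing E810 port (cvl_0_0) into cvl_0_0_ns_spdk, leaves the initiator port (cvl_0_1) in the root namespace, addresses both ends, opens TCP/4420 through iptables, checks reachability in both directions, and then starts nvmf_tgt inside the namespace with -L nvme_auth so DH-HMAC-CHAP negotiation is logged. A condensed sketch of the same setup (run as root), with interface names, addresses and the SPDK path taken from the trace:

#!/usr/bin/env bash
set -e
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path as it appears in the trace
tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"                          # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator address (root namespace)
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target address
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT   # let fabric traffic in
ping -c 1 10.0.0.2                                         # reach the target from the initiator side
ip netns exec "$ns" ping -c 1 10.0.0.1                     # and the initiator from the target side
ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &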
00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.881 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=946a4b2a9519f108ed6670c184ea86e1 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8LF 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 946a4b2a9519f108ed6670c184ea86e1 0 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 946a4b2a9519f108ed6670c184ea86e1 0 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=946a4b2a9519f108ed6670c184ea86e1 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8LF 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8LF 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8LF 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.139 09:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52782d4a8d48b8c9a5b5d045c02a125e4999bb7324d7476dc78ec0a09b92f062 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.t8X 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52782d4a8d48b8c9a5b5d045c02a125e4999bb7324d7476dc78ec0a09b92f062 3 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52782d4a8d48b8c9a5b5d045c02a125e4999bb7324d7476dc78ec0a09b92f062 3 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52782d4a8d48b8c9a5b5d045c02a125e4999bb7324d7476dc78ec0a09b92f062 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.t8X 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.t8X 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.t8X 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac2896f717cc69b802f0b991d0df27cd01ede48f1f4d3c38 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DhR 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac2896f717cc69b802f0b991d0df27cd01ede48f1f4d3c38 0 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac2896f717cc69b802f0b991d0df27cd01ede48f1f4d3c38 0 
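Each gen_dhchap_key call in this stretch draws len/2 random bytes through xxd, hands the hex secret and a digest selector (0=null, 1=SHA-256, 2=SHA-384, 3=SHA-512) to format_key, and stores the resulting DHHC-1 secret 0600 in a /tmp/spdk.key-* file. A minimal bash sketch of that flow, keeping to the pieces visible in the trace; the CRC-32 + base64 wrapping is done by an inline python snippet in nvmf/common.sh whose body this excerpt does not show, so only a labelled placeholder is written here instead of re-implementing it:

#!/usr/bin/env bash
set -e
len=32        # hex digits; the runs here use 32, 48 and 64
digest=0      # 0=null, 1=sha256, 2=sha384, 3=sha512

key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 946a4b2a9519f108ed6670c184ea86e1
file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.8LF

# format_key DHHC-1 "$key" "$digest" emits a string shaped like
#   DHHC-1:<two-hex-digit hash id>:<base64 of secret bytes + CRC-32>:
# placeholder written instead of reproducing that wrapping:
printf 'DHHC-1:%02x:<base64-wrapped %s>:\n' "$digest" "$key" > "$file"

chmod 0600 "$file"
echo "$file"

Each generated file is then registered with the running target through the keyring_file_add_key RPC (key0/ckey0, key1/ckey1, ...), which is what the rpc_cmd calls toward the end of this stretch of the trace are doing.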
00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.139 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.140 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac2896f717cc69b802f0b991d0df27cd01ede48f1f4d3c38 00:24:22.140 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:22.140 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DhR 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DhR 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DhR 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=721ec1b85dcb14c22b85c6c91a4759fadc13140c85ccb52b 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6ye 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 721ec1b85dcb14c22b85c6c91a4759fadc13140c85ccb52b 2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 721ec1b85dcb14c22b85c6c91a4759fadc13140c85ccb52b 2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=721ec1b85dcb14c22b85c6c91a4759fadc13140c85ccb52b 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6ye 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6ye 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6ye 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.398 09:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=944944db554a313134d882362d9131d3 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xBS 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 944944db554a313134d882362d9131d3 1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 944944db554a313134d882362d9131d3 1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=944944db554a313134d882362d9131d3 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xBS 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xBS 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xBS 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d972e8b70d4ee855d1e8f91f2ef843b 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.X9g 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d972e8b70d4ee855d1e8f91f2ef843b 1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d972e8b70d4ee855d1e8f91f2ef843b 1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=6d972e8b70d4ee855d1e8f91f2ef843b 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.X9g 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.X9g 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.X9g 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=73c00492d0fef8303f400a840d3905e50487aa88babf1085 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gs6 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 73c00492d0fef8303f400a840d3905e50487aa88babf1085 2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 73c00492d0fef8303f400a840d3905e50487aa88babf1085 2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=73c00492d0fef8303f400a840d3905e50487aa88babf1085 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gs6 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gs6 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gs6 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:22.398 09:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50ad8dbca5521efb6ea8fae94534b78d 00:24:22.398 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w6v 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50ad8dbca5521efb6ea8fae94534b78d 0 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50ad8dbca5521efb6ea8fae94534b78d 0 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50ad8dbca5521efb6ea8fae94534b78d 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w6v 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w6v 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.w6v 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:22.656 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8c82cf6a9df3908ee6fe4eb8c359d91655c3b9a51be6bbc3e4985b3a75099854 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iTZ 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8c82cf6a9df3908ee6fe4eb8c359d91655c3b9a51be6bbc3e4985b3a75099854 3 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8c82cf6a9df3908ee6fe4eb8c359d91655c3b9a51be6bbc3e4985b3a75099854 3 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8c82cf6a9df3908ee6fe4eb8c359d91655c3b9a51be6bbc3e4985b3a75099854 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iTZ 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iTZ 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iTZ 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3452095 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3452095 ']' 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.657 09:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8LF 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.t8X ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t8X 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DhR 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6ye ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.6ye 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xBS 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.X9g ]] 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.X9g 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.913 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gs6 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.w6v ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.w6v 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iTZ 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.914 09:35:35 
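The gen_dhchap_key calls above draw random bytes with xxd and wrap them in the DH-HMAC-CHAP secret representation, and the rpc_cmd keyring_file_add_key calls then hand the resulting 0600 files to the SPDK keyring as key0..key4 / ckey0..ckey3. The body of the "python -" helper is not visible in the log; the sketch below reproduces the observable behaviour (the ASCII hex string is the secret, a little-endian CRC-32 of it is appended, the whole thing is base64-encoded and wrapped as DHHC-1:<digest id>:...:) and should be read as an assumption consistent with the DHHC-1 strings seen later in the log, not as the verbatim helper. The scripts/rpc.py path is likewise assumed; rpc_cmd in the log wraps it.

  # Sketch of gen_dhchap_key/format_dhchap_key under the assumptions above.
  hex_secret=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in the log
  digest_id=1                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
  dhchap_key=$(python3 - "$hex_secret" "$digest_id" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")            # CRC-32 of the secret, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]),
                           base64.b64encode(secret + crc).decode()), end="")
PY
  )
  keyfile=$(mktemp -t spdk.key-sha256.XXX)
  echo "$dhchap_key" > "$keyfile"
  chmod 0600 "$keyfile"
  # Register the file-backed key with the running SPDK target (rpc.py path assumed):
  scripts/rpc.py keyring_file_add_key key2 "$keyfile"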
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:22.914 09:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:25.437 Waiting for block devices as requested 00:24:25.437 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:25.695 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:25.695 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:25.695 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:25.952 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:25.952 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:25.952 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:26.210 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:26.210 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:26.210 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:26.210 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:26.467 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:26.467 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:26.467 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:26.725 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:26.725 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:26.725 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:27.291 No valid GPT data, bailing 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:27.291 09:35:39 
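configure_kernel_target is building a Linux-kernel NVMe-oF soft target to authenticate against: setup.sh reset hands the physical drive back from vfio-pci to the kernel nvme driver, the block_in_use check confirms /dev/nvme0n1 carries no partition table ("No valid GPT data, bailing" is the expected outcome here, not an error), and the mkdir calls create the configfs skeleton that the echo commands just below populate. The log does not show which attribute file each echo is redirected into; the sketch below uses the standard nvmet configfs attribute names and is a plausible reconstruction rather than a transcript.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"       # assumed target of 'echo SPDK-...'
  echo 1            > "$subsys/attr_allow_any_host"                 # nvmet_auth_init flips this to 0 later
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"            # back the namespace with the local drive
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                      # expose the subsystem on the port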
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:27.291 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:27.549 00:24:27.549 Discovery Log Number of Records 2, Generation counter 2 00:24:27.549 =====Discovery Log Entry 0====== 00:24:27.549 trtype: tcp 00:24:27.549 adrfam: ipv4 00:24:27.549 subtype: current discovery subsystem 00:24:27.549 treq: not specified, sq flow control disable supported 00:24:27.549 portid: 1 00:24:27.549 trsvcid: 4420 00:24:27.549 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:27.549 traddr: 10.0.0.1 00:24:27.549 eflags: none 00:24:27.549 sectype: none 00:24:27.549 =====Discovery Log Entry 1====== 00:24:27.549 trtype: tcp 00:24:27.549 adrfam: ipv4 00:24:27.549 subtype: nvme subsystem 00:24:27.549 treq: not specified, sq flow control disable supported 00:24:27.549 portid: 1 00:24:27.549 trsvcid: 4420 00:24:27.549 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:27.549 traddr: 10.0.0.1 00:24:27.549 eflags: none 00:24:27.549 sectype: none 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
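With the discovery run above confirming that both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 answer on 10.0.0.1:4420, nvmet_auth_init and nvmet_auth_set_key tighten the target: a host entry for nqn.2024-02.io.spdk:host0 is created, the echo 0 presumably clears attr_allow_any_host, the host is linked under allowed_hosts/, and the digest, DH group and both DHHC-1 secrets are written into the host's configfs entries (the echo 'hmac(sha256)' just above and the echo ffdhe2048 / echo DHHC-1:... commands that continue just below). The dhchap_* attribute file names are not visible in the log; the sketch assumes the kernel's standard per-host attributes.

  nvmet=/sys/kernel/config/nvmet
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$host"
  echo 0 > "$subsys/attr_allow_any_host"           # only explicitly allowed hosts from here on
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 'hmac(sha256)'   > "$host/dhchap_hash"      # attribute names assumed
  echo ffdhe2048        > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:...:" > "$host/dhchap_key"       # host secret (keys[1] in the log, shortened here)
  echo "DHHC-1:02:...:" > "$host/dhchap_ctrl_key"  # controller secret (ckeys[1], shortened here)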
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 nvme0n1 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
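connect_authenticate is the initiator-side half of each iteration: it tells the SPDK bdev_nvme layer which digests and DH groups it may negotiate, attaches a controller with the matching key pair from the keyring, checks that a controller (and its nvme0n1 namespace) actually appeared, and detaches again. The RPC names and flags below are taken from the rpc_cmd calls in the log; only the scripts/rpc.py path is assumed.

  rpc=scripts/rpc.py   # rpc_cmd in the log wraps this
  $rpc bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The attach only succeeds if DH-HMAC-CHAP completed; confirm, then clean up:
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0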
00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.549 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.807 09:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.807 nvme0n1 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.807 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.807 09:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.808 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.065 nvme0n1 00:24:28.065 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.065 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.065 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.066 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 nvme0n1 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:28.323 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.324 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 nvme0n1 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.582 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.840 nvme0n1 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.840 09:35:40 
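Note the keyid=4 pass just above: ckeys[4] was left empty when the keys were generated, so the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion yields an empty array and the controller is attached with --dhchap-key key4 only, i.e. without bidirectional authentication. The same ${var:+...} idiom drives every step of the digest x dhgroup x keyid sweep that follows. A minimal standalone illustration, with variable names chosen here for clarity and the rpc.py path assumed as before:

  rpc=scripts/rpc.py
  keyid=4
  ckeys[4]=            # keyid 4 deliberately has no controller secret
  # ${var:+word} expands to 'word' only if var is set and non-empty, so the array
  # stays empty here and no --dhchap-ctrlr-key argument is emitted at all.
  ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey_args[@]}"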
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.840 09:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.840 nvme0n1 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.840 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.098 
09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 nvme0n1 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.098 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.099 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.099 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.099 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.099 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.356 09:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.356 nvme0n1 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.356 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.614 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.615 09:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.615 nvme0n1 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.615 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.872 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.872 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.872 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.872 09:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.872 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.873 09:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.873 nvme0n1 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.873 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.131 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.389 nvme0n1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:30.389 09:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.389 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.647 nvme0n1 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
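Note: the records above repeat one pattern per digest/dhgroup/keyid combination: program the target side with nvmet_auth_set_key, restrict the host's DH-HMAC-CHAP options, attach, verify, detach. Below is a minimal sketch of a single host-side iteration, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that key2/ckey2 name keys registered earlier in the test; neither detail is shown in this excerpt, and the target-side writes done by nvmet_auth_set_key are omitted because xtrace does not record their redirection targets.

    # One iteration of the loop traced above (sha256 / ffdhe4096 / keyid=2).
    digest=sha256
    dhgroup=ffdhe4096
    keyid=2

    # Host side: only allow the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authenticate and connect to the kernel target at 10.0.0.1:4420.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller must show up, then it is detached before the next iteration.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0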
00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.647 09:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.906 nvme0n1 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.906 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.164 nvme0n1 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.164 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.422 09:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.422 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.680 nvme0n1 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.680 09:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.937 nvme0n1 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.938 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 
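Note: the get_main_ns_ip helper traced from nvmf/common.sh (lines 769-783) selects which environment variable holds the address to dial and then dereferences it; with the tcp transport that resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1 in this run. The sketch below is reconstructed from the trace alone; the actual function body is not included in this log, so the guard behavior and any names beyond the traced ones are assumptions.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # The trace shows [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]], i.e. both
        # the transport and its candidate variable name are checked for emptiness.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion: the trace then shows [[ -z 10.0.0.1 ]] followed by
        # echo 10.0.0.1, the value of NVMF_INITIATOR_IP in this run.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }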
00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.196 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 nvme0n1 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.454 09:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.454 09:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.019 nvme0n1 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.019 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.020 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.276 nvme0n1 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.276 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.534 09:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.792 nvme0n1 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:33.792 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.793 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.358 nvme0n1 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.358 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.615 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.616 09:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.182 nvme0n1 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:35.182 
09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.182 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.747 nvme0n1 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.747 09:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.747 
09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.747 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.748 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.313 nvme0n1 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.313 09:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.878 nvme0n1 00:24:36.878 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.878 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.878 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.878 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.878 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.136 nvme0n1 00:24:37.136 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.137 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 nvme0n1 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:37.395 09:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 nvme0n1 00:24:37.654 09:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.654 09:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.916 nvme0n1 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.916 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.240 nvme0n1 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.240 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 nvme0n1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.549 
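The short run of nvmf/common.sh lines traced repeatedly above (local ip; ip_candidates["rdma"]=...; echo 10.0.0.1) is the get_main_ns_ip helper that runs before every bdev_nvme_attach_controller call: it maps the transport in use to the variable holding the host-side source address and, for tcp, resolves NVMF_INITIATOR_IP to 10.0.0.1. A minimal sketch of that selection logic, reconstructed from the traced lines (the variable carrying the transport name is not visible in the trace and is assumed here to be TEST_TRANSPORT; the real helper in nvmf/common.sh may differ in detail):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        # Each transport reads its source address from a different variable.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # trace shows: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # ip=NVMF_INITIATOR_IP for tcp

        # Indirect expansion: NVMF_INITIATOR_IP is 10.0.0.1 in this run, so 10.0.0.1 is echoed.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
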
09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.549 09:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 nvme0n1 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.549 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.815 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.816 09:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.816 nvme0n1 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.816 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.073 nvme0n1 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.073 
09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.073 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.331 nvme0n1 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.331 
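Each repetition above is one authenticated connect/disconnect cycle: host/auth.sh@103 calls nvmet_auth_set_key to program the target with the DHHC-1 secret (plus the controller secret, when one exists) for the current digest/dhgroup/keyid, and host/auth.sh@104 calls connect_authenticate, which restricts the host to that digest and dhgroup, attaches to nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420 with the matching --dhchap-key, verifies that controller nvme0 shows up, and detaches it before the next key is tried. A rough reconstruction of that loop from the traced commands (keys[], ckeys[], rpc_cmd, nvmet_auth_set_key and get_main_ns_ip come from the surrounding test scripts and are assumed to be defined; only sha384 is exercised in this part of the log, and the real host/auth.sh may structure this differently):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key is optional: only added when a ckey<N> was generated for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only succeeds if DH-HMAC-CHAP completed, so nvme0 must be listed,
        # and it is detached again before the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # Outer loops visible at host/auth.sh@101-104.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in "${!keys[@]}"; do                         # keyids 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"      # target side: load key/ckey for this id
            connect_authenticate sha384 "$dhgroup" "$keyid"    # host side: connect and verify
        done
    done
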
09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.331 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.589 nvme0n1 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.589 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.847 09:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.847 09:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.847 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.106 nvme0n1 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.106 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.364 nvme0n1 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.364 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.365 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.623 nvme0n1 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.623 09:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.623 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.881 09:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.881 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.139 nvme0n1 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:41.139 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.140 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.398 nvme0n1 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.398 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.656 09:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.914 nvme0n1 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.914 09:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.914 09:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.914 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.481 nvme0n1 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.481 09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.481 
09:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.739 nvme0n1 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.739 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.302 nvme0n1 00:24:43.302 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.303 09:35:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.303 09:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.867 nvme0n1 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.867 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.868 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.432 nvme0n1 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:44.432 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.689 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.690 
09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.690 09:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 nvme0n1 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.255 09:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.820 nvme0n1 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.820 09:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.820 09:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.820 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.385 nvme0n1 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.385 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.643 nvme0n1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.643 09:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.901 nvme0n1 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:46.901 
09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.901 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.902 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.902 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.902 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.159 nvme0n1 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.159 
09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.159 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.160 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.417 nvme0n1 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.417 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.675 nvme0n1 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.675 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.676 09:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 nvme0n1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.933 
09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.933 09:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.933 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.191 nvme0n1 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:48.191 09:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.191 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.192 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.192 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.192 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.192 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.192 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.449 nvme0n1 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.449 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.450 09:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.450 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.707 nvme0n1 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.707 
09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.707 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.708 09:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
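Each nvme0n1 block in this stretch of the log is one DH-HMAC-CHAP iteration of the test: it first writes the digest (the echo 'hmac(sha512)' line), the DH group and the DHHC-1 key for the chosen keyid into the Linux nvmet target side, then restricts the SPDK initiator to the same digest/dhgroup with bdev_nvme_set_options, attaches with bdev_nvme_attach_controller passing --dhchap-key (and --dhchap-ctrlr-key only when a controller key exists for that keyid), confirms the controller shows up in bdev_nvme_get_controllers, and detaches it before the next combination. A minimal sketch of that loop follows; it is a reconstruction from the trace, not the literal host/auth.sh — the rpc.py path, the loop bounds, and the pre-registered key names key0..key4 / ckey0..ckey3 are assumptions for illustration, while every RPC name and flag appears verbatim in the log above.

    #!/usr/bin/env bash
    # Hedged sketch of one digest/dhgroup/keyid sweep as seen in this log section.
    rpc=./scripts/rpc.py                              # assumed path to SPDK's rpc.py
    digests=(sha384 sha512)                           # digests visible in this part of the trace
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
    ckeys=(ckey0 ckey1 ckey2 ckey3 "")                # keyid 4 has no controller key in the trace
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3 4; do
          # limit the initiator to this digest and DH group
          $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # connect, adding the controller key only when one is configured for this keyid
          $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "${ckeys[keyid]}"}
          # the trace verifies the controller name via jq; a grep works for a sketch
          $rpc bdev_nvme_get_controllers | grep -q nvme0
          $rpc bdev_nvme_detach_controller nvme0
        done
      done
    done

The ${ckeys[keyid]:+...} expansion mirrors the ckey=(...) idiom visible in the trace: when the controller-key slot is empty (keyid 4), the --dhchap-ctrlr-key argument is simply omitted.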
00:24:48.965 nvme0n1 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.965 09:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.965 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.223 nvme0n1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.223 09:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.223 09:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.223 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.481 nvme0n1 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.481 09:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.739 nvme0n1 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.739 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.997 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.254 nvme0n1 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.254 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.255 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.512 nvme0n1 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.512 09:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.512 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.513 09:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 nvme0n1 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.077 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.078 09:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.078 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 nvme0n1 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.335 09:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.900 nvme0n1 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.900 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.157 nvme0n1 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.157 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.414 09:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.414 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.672 nvme0n1 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.672 09:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQ2YTRiMmE5NTE5ZjEwOGVkNjY3MGMxODRlYTg2ZTG2bf4y: 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: ]] 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI3ODJkNGE4ZDQ4YjhjOWE1YjVkMDQ1YzAyYTEyNWU0OTk5YmI3MzI0ZDc0NzZkYzc4ZWMwYTA5YjkyZjA2MgFqJXg=: 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.672 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.673 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.604 nvme0n1 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.604 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.605 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.605 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.605 09:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.169 nvme0n1 00:24:54.169 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.169 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.169 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.170 09:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.170 09:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.170 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.735 nvme0n1 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzNjMDA0OTJkMGZlZjgzMDNmNDAwYTg0MGQzOTA1ZTUwNDg3YWE4OGJhYmYxMDg1JXCqBg==: 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTBhZDhkYmNhNTUyMWVmYjZlYThmYWU5NDUzNGI3OGTLvGjx: 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.735 09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.735 
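Each pass of the keyid loop above provisions the kernel nvmet target with one DH-HMAC-CHAP secret pair before the SPDK host re-attaches with the matching --dhchap-key/--dhchap-ctrlr-key names. The redirection targets of the echo calls are not visible at this xtrace level; the sketch below assumes the standard Linux nvmet configfs attributes under the per-host directory, which is consistent with the /sys/kernel/config/nvmet paths removed during cleanup at the end of this run. The attribute names are an assumption, not something this log confirms.

# Sketch of what nvmet_auth_set_key sha512 ffdhe8192 <keyid> is doing on the
# target side.  Attribute names are assumed from the nvmet configfs layout;
# the host NQN and secrets are the ones used in this run (truncated here).
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest for DH-HMAC-CHAP
echo ffdhe8192      > "$host_dir/dhchap_dhgroup"    # FFDHE group for the exchange
echo 'DHHC-1:00:OTQ2...:' > "$host_dir/dhchap_key"       # host secret (truncated)
echo 'DHHC-1:03:NTI3...:' > "$host_dir/dhchap_ctrl_key"  # controller secret, only when a ckey exists

On the host side the same loop pins SPDK to the matching algorithms (bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192) and then re-attaches with --dhchap-key keyN --dhchap-ctrlr-key ckeyN, so every digest/dhgroup/keyid combination gets a full connect, authenticate and detach cycle.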
09:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.299 nvme0n1 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.299 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGM4MmNmNmE5ZGYzOTA4ZWU2ZmU0ZWI4YzM1OWQ5MTY1NWMzYjlhNTFiZTZiYmMzZTQ5ODViM2E3NTA5OTg1NDJAxtI=: 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.300 09:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 nvme0n1 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 request: 00:24:56.233 { 00:24:56.233 "name": "nvme0", 00:24:56.233 "trtype": "tcp", 00:24:56.233 "traddr": "10.0.0.1", 00:24:56.233 "adrfam": "ipv4", 00:24:56.233 "trsvcid": "4420", 00:24:56.233 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:56.233 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:56.233 "prchk_reftag": false, 00:24:56.233 "prchk_guard": false, 00:24:56.233 "hdgst": false, 00:24:56.233 "ddgst": false, 00:24:56.233 "allow_unrecognized_csi": false, 00:24:56.233 "method": "bdev_nvme_attach_controller", 00:24:56.233 "req_id": 1 00:24:56.233 } 00:24:56.233 Got JSON-RPC error response 00:24:56.233 response: 00:24:56.233 { 00:24:56.233 "code": -5, 00:24:56.233 "message": "Input/output error" 00:24:56.233 } 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.233 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
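The request/response pair above is the first negative case: attaching to the DH-CHAP-protected subsystem without offering any --dhchap-key is expected to fail, and the NOT wrapper counts the -5 (Input/output error) JSON-RPC response as the desired outcome. Outside the harness the same calls would go through scripts/rpc.py against the running target. A minimal sketch, reusing the addresses and key names from this run; how key1/ckey1 were registered (presumably from the /tmp/spdk.key-* files deleted at the end of this log) happens earlier in auth.sh and is not shown here.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Expected to fail with -5: no DH-HMAC-CHAP key offered to an authenticated subsystem.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    || echo "rejected as expected"

# Expected to succeed: offer the host and controller secrets that match the
# keyid currently programmed into the kernel target.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1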
00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.234 request: 00:24:56.234 { 00:24:56.234 "name": "nvme0", 00:24:56.234 "trtype": "tcp", 00:24:56.234 "traddr": "10.0.0.1", 00:24:56.234 "adrfam": "ipv4", 00:24:56.234 "trsvcid": "4420", 00:24:56.234 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:56.234 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:56.234 "prchk_reftag": false, 00:24:56.234 "prchk_guard": false, 00:24:56.234 "hdgst": false, 00:24:56.234 "ddgst": false, 00:24:56.234 "dhchap_key": "key2", 00:24:56.234 "allow_unrecognized_csi": false, 00:24:56.234 "method": "bdev_nvme_attach_controller", 00:24:56.234 "req_id": 1 00:24:56.234 } 00:24:56.234 Got JSON-RPC error response 00:24:56.234 response: 00:24:56.234 { 00:24:56.234 "code": -5, 00:24:56.234 "message": "Input/output error" 00:24:56.234 } 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.234 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.492 request: 00:24:56.492 { 00:24:56.492 "name": "nvme0", 00:24:56.492 "trtype": "tcp", 00:24:56.492 "traddr": "10.0.0.1", 00:24:56.492 "adrfam": "ipv4", 00:24:56.492 "trsvcid": "4420", 00:24:56.492 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:56.492 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:56.492 "prchk_reftag": false, 00:24:56.492 "prchk_guard": false, 00:24:56.492 "hdgst": false, 00:24:56.492 "ddgst": false, 00:24:56.492 "dhchap_key": "key1", 00:24:56.492 "dhchap_ctrlr_key": "ckey2", 00:24:56.492 "allow_unrecognized_csi": false, 00:24:56.492 "method": "bdev_nvme_attach_controller", 00:24:56.492 "req_id": 1 00:24:56.492 } 00:24:56.492 Got JSON-RPC error response 00:24:56.492 response: 00:24:56.492 { 00:24:56.492 "code": -5, 00:24:56.492 "message": "Input/output 
error" 00:24:56.492 } 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.492 nvme0n1 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.492 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.750 request: 00:24:56.750 { 00:24:56.750 "name": "nvme0", 00:24:56.750 "dhchap_key": "key1", 00:24:56.750 "dhchap_ctrlr_key": "ckey2", 00:24:56.750 "method": "bdev_nvme_set_keys", 00:24:56.750 "req_id": 1 00:24:56.750 } 00:24:56.750 Got JSON-RPC error response 00:24:56.750 response: 00:24:56.750 { 00:24:56.750 "code": -13, 00:24:56.750 "message": "Permission denied" 00:24:56.750 } 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:56.750 09:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:57.681 09:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.681 09:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:57.681 09:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.681 09:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.681 09:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.681 09:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:57.681 09:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMyODk2ZjcxN2NjNjliODAyZjBiOTkxZDBkZjI3Y2QwMWVkZTQ4ZjFmNGQzYzM4x9no9w==: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzIxZWMxYjg1ZGNiMTRjMjJiODVjNmM5MWE0NzU5ZmFkYzEzMTQwYzg1Y2NiNTJiyDJdhQ==: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.053 nvme0n1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OTQ0ZGI1NTRhMzEzMTM0ZDg4MjM2MmQ5MTMxZDPj7hAl: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: ]] 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQ5NzJlOGI3MGQ0ZWU4NTVkMWU4ZjkxZjJlZjg0M2KXnA57: 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.053 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.054 request: 00:24:59.054 { 00:24:59.054 "name": "nvme0", 00:24:59.054 "dhchap_key": "key2", 00:24:59.054 "dhchap_ctrlr_key": "ckey1", 00:24:59.054 "method": "bdev_nvme_set_keys", 00:24:59.054 "req_id": 1 00:24:59.054 } 00:24:59.054 Got JSON-RPC error response 00:24:59.054 response: 00:24:59.054 { 00:24:59.054 "code": -13, 00:24:59.054 "message": "Permission denied" 00:24:59.054 } 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:59.054 09:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:00.425 09:36:12 
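The -13 (Permission denied) responses above come from the live re-keying path: bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 succeeded once the target had been switched to keyid 2, while the mismatched pairs (key1/ckey2, then key2/ckey1) are refused. The jq length polling that follows waits for the controller left holding stale keys to drop out once its reconnects, bounded by --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 from the attach, give up. The same flow with plain rpc.py calls, names taken from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Rotate both secrets on the existing controller; succeeds only when the
# target side already expects the new keyid.
$rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# A mismatched pair comes back as JSON-RPC -13 (Permission denied).
$rpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
    && echo "unexpected success" || echo "rejected as expected"

# Wait for the stale controller to drop, as the harness does with jq length.
while [ "$($rpc bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
    sleep 1
done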
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.425 rmmod nvme_tcp 00:25:00.425 rmmod nvme_fabrics 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:00.425 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3452095 ']' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3452095 ']' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3452095' 00:25:00.426 killing process with pid 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3452095 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:00.426 09:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:02.951 09:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:04.848 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:04.848 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.106 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:06.041 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:06.041 09:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8LF /tmp/spdk.key-null.DhR /tmp/spdk.key-sha256.xBS /tmp/spdk.key-sha384.gs6 /tmp/spdk.key-sha512.iTZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:06.041 09:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:08.571 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:08.571 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:25:08.571 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:08.571 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:08.571 00:25:08.571 real 0m52.387s 00:25:08.571 user 0m47.657s 00:25:08.571 sys 0m11.759s 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 ************************************ 00:25:08.571 END TEST nvmf_auth_host 00:25:08.571 ************************************ 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.571 ************************************ 00:25:08.571 START TEST nvmf_digest 00:25:08.571 ************************************ 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:08.571 * Looking for test storage... 
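Editor's note: the auth-host cleanup traced above tears down the kernel nvmet target through configfs (host/auth.sh@25-27 and clean_kernel_target in nvmf/common.sh). Condensed into readable form, and keeping the order, which matters because configfs refuses to remove directories that are still linked; the target of the echoed 0 is not shown in the trace and is assumed here to be the namespace enable attribute:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$sub/allowed_hosts/nqn.2024-02.io.spdk:host0"                      # drop the host ACL symlink first
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub/namespaces/1/enable"                                       # assumed target of the echoed 0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0   # unlink port -> subsystem
    rmdir "$sub/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$sub"
    modprobe -r nvmet_tcp nvmet                                               # finally unload the kernel target modules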
00:25:08.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:08.571 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:08.572 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:08.830 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.831 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:08.831 09:36:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.831 --rc genhtml_branch_coverage=1 00:25:08.831 --rc genhtml_function_coverage=1 00:25:08.831 --rc genhtml_legend=1 00:25:08.831 --rc geninfo_all_blocks=1 00:25:08.831 --rc geninfo_unexecuted_blocks=1 00:25:08.831 00:25:08.831 ' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.831 --rc genhtml_branch_coverage=1 00:25:08.831 --rc genhtml_function_coverage=1 00:25:08.831 --rc genhtml_legend=1 00:25:08.831 --rc geninfo_all_blocks=1 00:25:08.831 --rc geninfo_unexecuted_blocks=1 00:25:08.831 00:25:08.831 ' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.831 --rc genhtml_branch_coverage=1 00:25:08.831 --rc genhtml_function_coverage=1 00:25:08.831 --rc genhtml_legend=1 00:25:08.831 --rc geninfo_all_blocks=1 00:25:08.831 --rc geninfo_unexecuted_blocks=1 00:25:08.831 00:25:08.831 ' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:08.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.831 --rc genhtml_branch_coverage=1 00:25:08.831 --rc genhtml_function_coverage=1 00:25:08.831 --rc genhtml_legend=1 00:25:08.831 --rc geninfo_all_blocks=1 00:25:08.831 --rc geninfo_unexecuted_blocks=1 00:25:08.831 00:25:08.831 ' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.831 
09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.831 09:36:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.831 09:36:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.091 
09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:14.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:14.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:14.091 Found net devices under 0000:af:00.0: cvl_0_0 
00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:14.091 Found net devices under 0000:af:00.1: cvl_0_1 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.091 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:25:14.092 00:25:14.092 --- 10.0.0.2 ping statistics --- 00:25:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.092 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:25:14.092 00:25:14.092 --- 10.0.0.1 ping statistics --- 00:25:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.092 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:14.092 ************************************ 00:25:14.092 START TEST nvmf_digest_clean 00:25:14.092 ************************************ 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3466088 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3466088 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3466088 ']' 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.092 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:14.350 [2024-12-13 09:36:26.470566] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:14.350 [2024-12-13 09:36:26.470611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.350 [2024-12-13 09:36:26.539296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.350 [2024-12-13 09:36:26.579532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.350 [2024-12-13 09:36:26.579567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.350 [2024-12-13 09:36:26.579574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.350 [2024-12-13 09:36:26.579580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.350 [2024-12-13 09:36:26.579585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
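Editor's note: condensed from the nvmf_tcp_init and nvmfappstart trace above, the target-side bring-up for this digest run is roughly the following (interface names, addresses and options are the ones from this run; paths abbreviated relative to the spdk tree):

    # cvl_0_0 (E810 port 0) becomes the target inside a namespace; cvl_0_1 stays in the root
    # namespace as the initiator side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # the trace tags this rule with an SPDK_NVMF comment so the cleanup can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the SPDK target in the namespace, paused until framework_start_init is sent;
    # the harness then waits for /var/tmp/spdk.sock before issuing RPCs
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &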
00:25:14.350 [2024-12-13 09:36:26.580072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.350 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:14.608 null0 00:25:14.608 [2024-12-13 09:36:26.740302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.608 [2024-12-13 09:36:26.764493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3466261 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3466261 /var/tmp/bperf.sock 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3466261 ']' 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:14.608 [2024-12-13 09:36:26.818522] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:14.608 [2024-12-13 09:36:26.818564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466261 ] 00:25:14.608 [2024-12-13 09:36:26.882153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.608 [2024-12-13 09:36:26.922712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:14.608 09:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:14.865 09:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.865 09:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.430 nvme0n1 00:25:15.430 09:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:15.430 09:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.430 Running I/O for 2 seconds... 
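Editor's note: each run_bperf iteration drives bdevperf entirely over its RPC socket. Condensed from the trace above for this first iteration (4096-byte random reads, queue depth 128, DSA disabled; paths abbreviated relative to the spdk tree):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest on the initiator side, which is what
    # exercises crc32c in the accel framework during the I/O run
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests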
00:25:17.731 25791.00 IOPS, 100.75 MiB/s [2024-12-13T08:36:30.097Z] 25975.50 IOPS, 101.47 MiB/s 00:25:17.731 Latency(us) 00:25:17.731 [2024-12-13T08:36:30.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.731 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:17.731 nvme0n1 : 2.01 25992.26 101.53 0.00 0.00 4918.22 2402.99 11421.99 00:25:17.731 [2024-12-13T08:36:30.097Z] =================================================================================================================== 00:25:17.731 [2024-12-13T08:36:30.097Z] Total : 25992.26 101.53 0.00 0.00 4918.22 2402.99 11421.99 00:25:17.731 { 00:25:17.731 "results": [ 00:25:17.731 { 00:25:17.731 "job": "nvme0n1", 00:25:17.731 "core_mask": "0x2", 00:25:17.731 "workload": "randread", 00:25:17.731 "status": "finished", 00:25:17.731 "queue_depth": 128, 00:25:17.731 "io_size": 4096, 00:25:17.731 "runtime": 2.006059, 00:25:17.731 "iops": 25992.256459057287, 00:25:17.731 "mibps": 101.53225179319253, 00:25:17.731 "io_failed": 0, 00:25:17.731 "io_timeout": 0, 00:25:17.731 "avg_latency_us": 4918.221289190142, 00:25:17.731 "min_latency_us": 2402.9866666666667, 00:25:17.731 "max_latency_us": 11421.988571428572 00:25:17.731 } 00:25:17.731 ], 00:25:17.731 "core_count": 1 00:25:17.731 } 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:17.731 | select(.opcode=="crc32c") 00:25:17.731 | "\(.module_name) \(.executed)"' 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3466261 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3466261 ']' 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3466261 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466261 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466261' 00:25:17.731 killing process with pid 3466261 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3466261 00:25:17.731 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.731 00:25:17.731 Latency(us) 00:25:17.731 [2024-12-13T08:36:30.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.731 [2024-12-13T08:36:30.097Z] =================================================================================================================== 00:25:17.731 [2024-12-13T08:36:30.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.731 09:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3466261 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3466776 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3466776 /var/tmp/bperf.sock 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3466776 ']' 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:17.989 [2024-12-13 09:36:30.191912] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:17.989 [2024-12-13 09:36:30.191962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466776 ] 00:25:17.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:17.989 Zero copy mechanism will not be used. 00:25:17.989 [2024-12-13 09:36:30.256237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.989 [2024-12-13 09:36:30.292904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:17.989 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:18.246 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.246 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.503 nvme0n1 00:25:18.761 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:18.761 09:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:18.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.761 Zero copy mechanism will not be used. 00:25:18.761 Running I/O for 2 seconds... 
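Editor's note: after each two-second run the harness checks that the digest work actually went through the accel framework. It reads the crc32c counters back over the same bperf socket and, since scan_dsa=false in these runs, expects the software module to report a non-zero executed count. Condensed from the get_accel_stats trace:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected output for these runs: "software <non-zero count>"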
00:25:20.630 5616.00 IOPS, 702.00 MiB/s [2024-12-13T08:36:32.996Z] 5303.50 IOPS, 662.94 MiB/s 00:25:20.630 Latency(us) 00:25:20.630 [2024-12-13T08:36:32.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:20.630 nvme0n1 : 2.00 5306.66 663.33 0.00 0.00 3012.37 764.59 4899.60 00:25:20.630 [2024-12-13T08:36:32.996Z] =================================================================================================================== 00:25:20.630 [2024-12-13T08:36:32.996Z] Total : 5306.66 663.33 0.00 0.00 3012.37 764.59 4899.60 00:25:20.630 { 00:25:20.630 "results": [ 00:25:20.630 { 00:25:20.630 "job": "nvme0n1", 00:25:20.630 "core_mask": "0x2", 00:25:20.630 "workload": "randread", 00:25:20.630 "status": "finished", 00:25:20.630 "queue_depth": 16, 00:25:20.630 "io_size": 131072, 00:25:20.630 "runtime": 2.001824, 00:25:20.630 "iops": 5306.660325782886, 00:25:20.630 "mibps": 663.3325407228607, 00:25:20.630 "io_failed": 0, 00:25:20.630 "io_timeout": 0, 00:25:20.630 "avg_latency_us": 3012.3653334409164, 00:25:20.630 "min_latency_us": 764.5866666666667, 00:25:20.630 "max_latency_us": 4899.596190476191 00:25:20.630 } 00:25:20.630 ], 00:25:20.630 "core_count": 1 00:25:20.630 } 00:25:20.630 09:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:20.630 09:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:20.630 09:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:20.630 09:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:20.630 | select(.opcode=="crc32c") 00:25:20.630 | "\(.module_name) \(.executed)"' 00:25:20.630 09:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3466776 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3466776 ']' 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3466776 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466776 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466776' 00:25:20.888 killing process with pid 3466776 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3466776 00:25:20.888 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.888 00:25:20.888 Latency(us) 00:25:20.888 [2024-12-13T08:36:33.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.888 [2024-12-13T08:36:33.254Z] =================================================================================================================== 00:25:20.888 [2024-12-13T08:36:33.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.888 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3466776 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3467244 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3467244 /var/tmp/bperf.sock 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3467244 ']' 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.147 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.147 [2024-12-13 09:36:33.419941] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:21.147 [2024-12-13 09:36:33.419987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467244 ] 00:25:21.147 [2024-12-13 09:36:33.478572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.405 [2024-12-13 09:36:33.520934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.405 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.405 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:21.405 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:21.405 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:21.405 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:21.663 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.663 09:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.921 nvme0n1 00:25:22.191 09:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:22.191 09:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.191 Running I/O for 2 seconds... 
00:25:24.135 27946.00 IOPS, 109.16 MiB/s [2024-12-13T08:36:36.501Z] 28384.50 IOPS, 110.88 MiB/s 00:25:24.135 Latency(us) 00:25:24.135 [2024-12-13T08:36:36.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.135 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:24.135 nvme0n1 : 2.00 28395.36 110.92 0.00 0.00 4502.20 2246.95 15478.98 00:25:24.135 [2024-12-13T08:36:36.501Z] =================================================================================================================== 00:25:24.135 [2024-12-13T08:36:36.501Z] Total : 28395.36 110.92 0.00 0.00 4502.20 2246.95 15478.98 00:25:24.135 { 00:25:24.135 "results": [ 00:25:24.135 { 00:25:24.135 "job": "nvme0n1", 00:25:24.135 "core_mask": "0x2", 00:25:24.135 "workload": "randwrite", 00:25:24.135 "status": "finished", 00:25:24.135 "queue_depth": 128, 00:25:24.135 "io_size": 4096, 00:25:24.135 "runtime": 2.003743, 00:25:24.135 "iops": 28395.358087339544, 00:25:24.135 "mibps": 110.9193675286701, 00:25:24.135 "io_failed": 0, 00:25:24.135 "io_timeout": 0, 00:25:24.135 "avg_latency_us": 4502.2029757699165, 00:25:24.135 "min_latency_us": 2246.9485714285715, 00:25:24.135 "max_latency_us": 15478.979047619048 00:25:24.135 } 00:25:24.135 ], 00:25:24.135 "core_count": 1 00:25:24.135 } 00:25:24.135 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:24.135 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:24.135 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:24.135 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:24.135 | select(.opcode=="crc32c") 00:25:24.135 | "\(.module_name) \(.executed)"' 00:25:24.135 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3467244 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3467244 ']' 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3467244 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467244 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3467244' 00:25:24.394 killing process with pid 3467244 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3467244 00:25:24.394 Received shutdown signal, test time was about 2.000000 seconds 00:25:24.394 00:25:24.394 Latency(us) 00:25:24.394 [2024-12-13T08:36:36.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.394 [2024-12-13T08:36:36.760Z] =================================================================================================================== 00:25:24.394 [2024-12-13T08:36:36.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.394 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3467244 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3467913 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3467913 /var/tmp/bperf.sock 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3467913 ']' 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:24.653 [2024-12-13 09:36:36.856853] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:24.653 [2024-12-13 09:36:36.856901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467913 ] 00:25:24.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.653 Zero copy mechanism will not be used. 00:25:24.653 [2024-12-13 09:36:36.919255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.653 [2024-12-13 09:36:36.957373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:24.653 09:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:24.911 09:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.911 09:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.170 nvme0n1 00:25:25.428 09:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:25.428 09:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:25.428 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.428 Zero copy mechanism will not be used. 00:25:25.428 Running I/O for 2 seconds... 
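This pass repeats the same flow with 128 KiB random writes at queue depth 16; because the 131072-byte I/O size exceeds the 65536-byte zero copy threshold, bdevperf notes that the zero copy mechanism will not be used. After each pass the test reads the accel framework statistics and checks that crc32c was executed, and by the expected module (software here, since scan_dsa=false). A sketch of that check, reusing the jq filter shown in this log:

  # ask bdevperf's accel framework how many crc32c operations ran, and which module ran them
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # digest.sh then asserts executed > 0 and that the module matches exp_module (software)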
00:25:27.298 6383.00 IOPS, 797.88 MiB/s [2024-12-13T08:36:39.664Z] 6320.00 IOPS, 790.00 MiB/s 00:25:27.298 Latency(us) 00:25:27.298 [2024-12-13T08:36:39.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.298 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:27.298 nvme0n1 : 2.00 6319.51 789.94 0.00 0.00 2527.83 1591.59 6678.43 00:25:27.298 [2024-12-13T08:36:39.664Z] =================================================================================================================== 00:25:27.298 [2024-12-13T08:36:39.664Z] Total : 6319.51 789.94 0.00 0.00 2527.83 1591.59 6678.43 00:25:27.298 { 00:25:27.298 "results": [ 00:25:27.298 { 00:25:27.298 "job": "nvme0n1", 00:25:27.298 "core_mask": "0x2", 00:25:27.298 "workload": "randwrite", 00:25:27.298 "status": "finished", 00:25:27.298 "queue_depth": 16, 00:25:27.298 "io_size": 131072, 00:25:27.298 "runtime": 2.003161, 00:25:27.298 "iops": 6319.512011266193, 00:25:27.298 "mibps": 789.9390014082742, 00:25:27.298 "io_failed": 0, 00:25:27.298 "io_timeout": 0, 00:25:27.298 "avg_latency_us": 2527.830393584087, 00:25:27.298 "min_latency_us": 1591.5885714285714, 00:25:27.298 "max_latency_us": 6678.430476190476 00:25:27.298 } 00:25:27.298 ], 00:25:27.298 "core_count": 1 00:25:27.298 } 00:25:27.298 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:27.298 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:27.298 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:27.298 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:27.298 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:27.298 | select(.opcode=="crc32c") 00:25:27.298 | "\(.module_name) \(.executed)"' 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3467913 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3467913 ']' 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3467913 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467913 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3467913' 00:25:27.557 killing process with pid 3467913 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3467913 00:25:27.557 Received shutdown signal, test time was about 2.000000 seconds 00:25:27.557 00:25:27.557 Latency(us) 00:25:27.557 [2024-12-13T08:36:39.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.557 [2024-12-13T08:36:39.923Z] =================================================================================================================== 00:25:27.557 [2024-12-13T08:36:39.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.557 09:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3467913 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3466088 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3466088 ']' 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3466088 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466088 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466088' 00:25:27.815 killing process with pid 3466088 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3466088 00:25:27.815 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3466088 00:25:28.074 00:25:28.074 real 0m13.868s 00:25:28.074 user 0m26.427s 00:25:28.074 sys 0m4.533s 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:28.074 ************************************ 00:25:28.074 END TEST nvmf_digest_clean 00:25:28.074 ************************************ 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:28.074 ************************************ 00:25:28.074 START TEST nvmf_digest_error 00:25:28.074 ************************************ 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3468401 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3468401 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3468401 ']' 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.074 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.074 [2024-12-13 09:36:40.392368] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:28.074 [2024-12-13 09:36:40.392411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.332 [2024-12-13 09:36:40.458348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.332 [2024-12-13 09:36:40.498607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.332 [2024-12-13 09:36:40.498643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.332 [2024-12-13 09:36:40.498652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.332 [2024-12-13 09:36:40.498658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.332 [2024-12-13 09:36:40.498663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.332 [2024-12-13 09:36:40.499171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.332 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.332 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.333 [2024-12-13 09:36:40.579643] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.333 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.333 null0 00:25:28.333 [2024-12-13 09:36:40.672055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.333 [2024-12-13 09:36:40.696245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3468544 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3468544 /var/tmp/bperf.sock 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3468544 ']' 
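The nvmf_digest_error target is started with --wait-for-rpc precisely so that crc32c can be rerouted before the framework comes up: accel_assign_opc -o crc32c -m error hands every crc32c operation to the accel error module, which can later be told to corrupt results on demand. A minimal sketch of that step, assuming the target's default /var/tmp/spdk.sock RPC socket used by rpc_cmd here:

  # route all crc32c operations to the error module while the target is still waiting for RPCs
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # the rest of the target setup (TCP transport, null0 bdev, subsystem, 10.0.0.2:4420 listener)
  # is performed by the test's common_target_config helper and is not repeated here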
00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:28.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.591 [2024-12-13 09:36:40.731339] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:28.591 [2024-12-13 09:36:40.731380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468544 ] 00:25:28.591 [2024-12-13 09:36:40.795034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.591 [2024-12-13 09:36:40.836804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.591 09:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.850 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.108 nvme0n1 00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
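With crc32c owned by the error module on the target, the bperf instance is configured to keep NVMe error statistics and to retry failed I/Os indefinitely (--bdev-retry-count -1), any stale injection is cleared, the controller is attached with data digest enabled, and only then is corruption armed. A sketch of the RPC sequence above (the -i 256 argument is reproduced as-is from the log; the inject calls go to the target's default RPC socket, as rpc_cmd does here):

  # bperf side: collect NVMe error stats, never give up on failed I/Os
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: clear any previous injection, attach with --ddgst, then arm crc32c corruption
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256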
00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:29.108 09:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:29.108 Running I/O for 2 seconds... 00:25:29.366 [2024-12-13 09:36:41.487596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.487626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.487636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.499786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.512019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.512044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.512053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.519943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.519964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.519973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.532016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.532037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.532045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.544596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.544617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.544625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.557417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.366 [2024-12-13 09:36:41.557438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.366 [2024-12-13 09:36:41.557446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.366 [2024-12-13 09:36:41.565986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.566007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.566016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.578086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.578107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.578115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.589520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.589541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.589549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.598004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.598025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.598033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.609415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.609436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.609444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.622379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.622400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.622408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.633862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.633882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.633890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.642544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.642565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.642573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.654207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.654227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.654238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.666105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.666125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.666133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.675031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.675051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.675059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.686534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.686554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.686562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.696636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.696656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.696664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.707312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.707332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.707341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.716170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.716200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.367 [2024-12-13 09:36:41.727873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.367 [2024-12-13 09:36:41.727894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.367 [2024-12-13 09:36:41.727902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.740458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.740488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.751767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.751793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.751802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.761048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.761068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.761076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.772155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.772176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 
[2024-12-13 09:36:41.772184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.781169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.781189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.781197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.790770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.790791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.790799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.800527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.800548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.800556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.808910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.808931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.808939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.818755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.818775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.818783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.828296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.828317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.828325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.840199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.840220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21429 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.840228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.849151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.849171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.849179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.861058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.861078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.626 [2024-12-13 09:36:41.861085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.626 [2024-12-13 09:36:41.873834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.626 [2024-12-13 09:36:41.873855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.873863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.884655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.884675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.884682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.893289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.893310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.893318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.906300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.906323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.906331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.917870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.917891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:18412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.917899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.926404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.926431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.926439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.938291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.938312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.938321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.948164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.948184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.948192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.957291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.957312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.957320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.969022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.969041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.969049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.978436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.978464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.978472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.627 [2024-12-13 09:36:41.986835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.627 [2024-12-13 09:36:41.986855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.627 [2024-12-13 09:36:41.986863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:41.997812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:41.997833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:41.997842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.005566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.005588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.005597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.015797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.015818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.015826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.024175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.024196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.024204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.035236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.035258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.035266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.045602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.045623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.045631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.056471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 
00:25:29.886 [2024-12-13 09:36:42.056493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.065222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.065243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.886 [2024-12-13 09:36:42.065251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.886 [2024-12-13 09:36:42.076123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.886 [2024-12-13 09:36:42.076143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.076152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.085229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.085249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.085258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.096802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.096824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.096836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.107159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.107180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.107188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.118592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.118612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.118619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.127563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.127584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.127592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.136059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.136079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.136087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.146540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.146561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.146569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.155037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.155058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.155066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.166435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.166462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.166470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.174737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.174758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.186882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.186908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.186916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.198471] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.198493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.198500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.211286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.211308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.211316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.219704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.219725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.219733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.231163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.231184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.231192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.242006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.242025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.242033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.887 [2024-12-13 09:36:42.250357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:29.887 [2024-12-13 09:36:42.250377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.887 [2024-12-13 09:36:42.250386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.262023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.262046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.262054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:30.146 [2024-12-13 09:36:42.272383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.272404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.272413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.282098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.282119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.282128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.291406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.291427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.291435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.300855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.300876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.300883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.312083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.312103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.320603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.320624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.320632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.331851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.331872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.331880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.146 [2024-12-13 09:36:42.343258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.146 [2024-12-13 09:36:42.343279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.146 [2024-12-13 09:36:42.343287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.352018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.352038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.352046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.364079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.364111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.371815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.371835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.371843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.382381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.382401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.382410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.391405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.391425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.391433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.401693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.401723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.401731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.410783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.410803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.410812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.419524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.419544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.419552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.430699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.430720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.430728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.441718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.441738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.441747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.450168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.450192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.450200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.462036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.462058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.462067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 24415.00 IOPS, 95.37 MiB/s [2024-12-13T08:36:42.513Z] [2024-12-13 09:36:42.471251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.471272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.147 [2024-12-13 09:36:42.471283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.479467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.479488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.479497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.490210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.490230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.490238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.499679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.499700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.499708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.147 [2024-12-13 09:36:42.509297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.147 [2024-12-13 09:36:42.509318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.147 [2024-12-13 09:36:42.509326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.517825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.517846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.517855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.527405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.527425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.527437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.537519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.537539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:187 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.537547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.546918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.546939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.546947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.556653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.556674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.556682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.564031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.564051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.564059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.573878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.573899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.573907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.584657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.584678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.584686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.406 [2024-12-13 09:36:42.595179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.406 [2024-12-13 09:36:42.595200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.406 [2024-12-13 09:36:42.595207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.605884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.605905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.605913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.613766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.613790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.613798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.626562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.626583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.626590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.635068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.635088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.635096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.646703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.646724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.658309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.658330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.658338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.669620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.669641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.669649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.678210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.678230] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.678238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.687989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.688008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.688017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.697167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.697187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.697195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.705723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.705744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.705751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.715383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.715403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.715410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.724542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.724561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.724569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.733684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.733705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.733713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.742848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 
00:25:30.407 [2024-12-13 09:36:42.742868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.742876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.751257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.751278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.751286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.760836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.760855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.760863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.407 [2024-12-13 09:36:42.771490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.407 [2024-12-13 09:36:42.771513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.407 [2024-12-13 09:36:42.771522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.779650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.779671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.779682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.789786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.789806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.789814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.799337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.799358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.799366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.809100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.809120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.809128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.818296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.818316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.818324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.827092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.827113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.827121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.837121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.837142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.837150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.846065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.846085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.846093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.854570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.854591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.854599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.864185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.864209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.864216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.873436] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.873470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.873479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.882702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.882722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.882730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.892563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.892583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.892591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.900943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.900964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.900972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.913230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.913251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.666 [2024-12-13 09:36:42.913259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.666 [2024-12-13 09:36:42.924350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.666 [2024-12-13 09:36:42.924371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.924380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.937610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.937630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.937638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:30.667 [2024-12-13 09:36:42.947099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.947120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.947128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.956030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.956050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.964793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.964813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.964821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.974421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.974441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.974453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.985477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.985497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.985505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:42.993618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:42.993638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:42.993645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:43.005403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:43.005423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:43.005431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:43.016121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:43.016141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:43.016149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.667 [2024-12-13 09:36:43.024986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.667 [2024-12-13 09:36:43.025007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.667 [2024-12-13 09:36:43.025015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.034929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.034949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.034962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.045147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.045166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.045175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.054423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.054445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.054458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.064776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.064796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.064804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.073877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.073898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.073905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.082516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.082537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.082545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.092809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.092829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.092838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.102751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.102772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.102779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.111107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.111127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.111135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.123670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.123690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.123698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.135632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.135652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.135660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.143641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.143661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.143669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.156235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.156256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.156264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.167074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.167094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.167102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.178800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.178820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.178828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.187371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.187395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.187402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.199571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.199592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.199600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.207542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.207561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.207575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.218027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.218047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 
[2024-12-13 09:36:43.218055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.227677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.227698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.227706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.240228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.240249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.240257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.250544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.250564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.258660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.926 [2024-12-13 09:36:43.258681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.926 [2024-12-13 09:36:43.258688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.926 [2024-12-13 09:36:43.270814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.927 [2024-12-13 09:36:43.270835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.927 [2024-12-13 09:36:43.270843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.927 [2024-12-13 09:36:43.279076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.927 [2024-12-13 09:36:43.279096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.927 [2024-12-13 09:36:43.279104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.927 [2024-12-13 09:36:43.289287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:30.927 [2024-12-13 09:36:43.289308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15625 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.927 [2024-12-13 09:36:43.289316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.185 [2024-12-13 09:36:43.298847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.298872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.298880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.309303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.309325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.309333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.317520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.317541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.317549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.328954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.328975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.328983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.338387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.338408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.338416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.346802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.346823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.346831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.356274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.356294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.356302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.365411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.365431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.376291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.376312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.376320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.384962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.384984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.384992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.395097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.395118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.395126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.407686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.407706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.407714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.417630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.417650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.417658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.425363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.425384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.425392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.436456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.436477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.436485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.445531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.445551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.445559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.454517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.454538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.454546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 [2024-12-13 09:36:43.463764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.463785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.463797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 25216.00 IOPS, 98.50 MiB/s [2024-12-13T08:36:43.552Z] [2024-12-13 09:36:43.475396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998ae0) 00:25:31.186 [2024-12-13 09:36:43.475417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.186 [2024-12-13 09:36:43.475425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.186 00:25:31.186 Latency(us) 00:25:31.186 [2024-12-13T08:36:43.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:31.186 nvme0n1 : 2.01 25212.41 98.49 0.00 0.00 5072.33 2543.42 17226.61 00:25:31.186 [2024-12-13T08:36:43.552Z] =================================================================================================================== 00:25:31.186 [2024-12-13T08:36:43.552Z] Total : 25212.41 98.49 0.00 0.00 5072.33 2543.42 17226.61 00:25:31.186 { 00:25:31.186 "results": [ 00:25:31.186 
{ 00:25:31.186 "job": "nvme0n1", 00:25:31.186 "core_mask": "0x2", 00:25:31.186 "workload": "randread", 00:25:31.186 "status": "finished", 00:25:31.186 "queue_depth": 128, 00:25:31.186 "io_size": 4096, 00:25:31.186 "runtime": 2.006869, 00:25:31.186 "iops": 25212.407984776284, 00:25:31.186 "mibps": 98.48596869053236, 00:25:31.186 "io_failed": 0, 00:25:31.186 "io_timeout": 0, 00:25:31.186 "avg_latency_us": 5072.331484436615, 00:25:31.186 "min_latency_us": 2543.4209523809523, 00:25:31.186 "max_latency_us": 17226.605714285713 00:25:31.186 } 00:25:31.186 ], 00:25:31.186 "core_count": 1 00:25:31.186 } 00:25:31.186 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:31.186 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:31.186 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:31.186 | .driver_specific 00:25:31.186 | .nvme_error 00:25:31.186 | .status_code 00:25:31.186 | .command_transient_transport_error' 00:25:31.186 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 )) 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3468544 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3468544 ']' 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3468544 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468544 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468544' 00:25:31.445 killing process with pid 3468544 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3468544 00:25:31.445 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.445 00:25:31.445 Latency(us) 00:25:31.445 [2024-12-13T08:36:43.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.445 [2024-12-13T08:36:43.811Z] =================================================================================================================== 00:25:31.445 [2024-12-13T08:36:43.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.445 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3468544 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:31.704 09:36:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3469098 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3469098 /var/tmp/bperf.sock 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3469098 ']' 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.704 09:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.704 [2024-12-13 09:36:43.906285] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:31.704 [2024-12-13 09:36:43.906330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469098 ] 00:25:31.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.704 Zero copy mechanism will not be used. 
00:25:31.704 [2024-12-13 09:36:43.963401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.704 [2024-12-13 09:36:44.004573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.962 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.221 nvme0n1 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:32.221 09:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.481 Zero copy mechanism will not be used. 00:25:32.481 Running I/O for 2 seconds... 
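The stream of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follows is the expected effect of the crc32c corruption injected above: the injected corruption makes the host's data digest check fail, and because retries are disabled (--bdev-retry-count -1) each failure is completed as a transient transport error and counted in the bdev's NVMe error statistics. A minimal sketch of the same sequence, using only the commands, sockets, and paths shown in this run; readiness waits and target-side setup are elided, and accel_error_inject_error is assumed to go to the target's default RPC socket, as rpc_cmd does here:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf with the workload under test: 128 KiB random reads, queue depth 16, 2 seconds.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &

  # Count NVMe error completions and never retry, so every digest error surfaces in the iostat.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach with data digest enabled while crc32c error injection is still disabled.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 32 crc32c operations, run the I/O, then read the transient transport error counter.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
  $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The run passes when the counter printed by the last command is greater than zero, as in the (( 198 > 0 )) check performed for the previous workload above.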
00:25:32.481 [2024-12-13 09:36:44.621344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.621376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.621387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.627445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.627491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.633398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.633421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.633429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.639421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.639444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.639457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.645354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.645375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.645383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.651099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.651120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.651128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.657124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.657145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.657153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.663232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.663254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.481 [2024-12-13 09:36:44.663262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.481 [2024-12-13 09:36:44.669560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.481 [2024-12-13 09:36:44.669582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.669590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.675722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.675744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.675752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.682004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.682025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.682033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.687718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.687741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.687749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.691394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.691416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.691424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.695996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.696017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.696025] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.701737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.701759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.701771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.707221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.707243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.707251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.712756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.712781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.712790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.718527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.718549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.718558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.724140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.724163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.724172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.730176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.730199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.730207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.735965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.735987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 
09:36:44.735995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.741827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.741849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.741857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.747666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.747687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.747696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.753511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.759207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.759229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.759237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.764708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.764730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.764738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.770509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.770530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.770539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.776326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.776347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.776356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.782092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.782114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.782122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.787694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.787716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.787724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.793283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.793305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.793313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.482 [2024-12-13 09:36:44.798775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.482 [2024-12-13 09:36:44.798797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.482 [2024-12-13 09:36:44.798806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.804849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.804872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.804880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.810355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.810382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.810390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.816209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.816231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.816239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.821898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.821920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.821928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.827391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.827413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.827421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.832995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.833016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.833024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.838539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.838560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.838568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.483 [2024-12-13 09:36:44.843981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.483 [2024-12-13 09:36:44.844003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.483 [2024-12-13 09:36:44.844012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.849432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.849460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.849475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.855246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 
09:36:44.855267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.855275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.860726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.860748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.860756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.866203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.866226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.866234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.871897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.871919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.871927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.877387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.877410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.877420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.882851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.882874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.882882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.888485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.888507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.888516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.893928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.893949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.893958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.899421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.899443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.899457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.904477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.904508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.743 [2024-12-13 09:36:44.910101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.743 [2024-12-13 09:36:44.910124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.743 [2024-12-13 09:36:44.910133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.915562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.915584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.915593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.921219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.921241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.921249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.926828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.926852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.926860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.932459] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.932481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.932489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.937965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.937986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.937995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.943495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.943517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.943529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.948885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.948907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.948915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.954329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.954350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.959751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.959773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.959781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.966042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.966065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.966073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:32.744 [2024-12-13 09:36:44.973587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.973610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.973618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.981529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.981552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.981560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.988650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.988673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.988681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:44.996268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:44.996290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:44.996298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.003108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.003134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.003142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.009050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.009072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.009080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.014986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.015008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.015016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.020701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.020733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.026398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.026419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.026427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.031948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.031970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.031978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.037648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.037670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.037678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.043466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.043487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.043496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.049113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.049135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.049143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.054869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.054890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.054898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.057896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.057918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.057926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.063262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.063284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.063291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.744 [2024-12-13 09:36:45.068648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.744 [2024-12-13 09:36:45.068670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.744 [2024-12-13 09:36:45.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.074042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.074063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.074070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.079344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.079365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.079373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.084806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.084827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.084835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.090340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.090362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 
[2024-12-13 09:36:45.090370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.095884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.095906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.095917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.101674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.101697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.101706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:32.745 [2024-12-13 09:36:45.106984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:32.745 [2024-12-13 09:36:45.107007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.745 [2024-12-13 09:36:45.107015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.112685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.112707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.112715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.118007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.118029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.118037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.123274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.123296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.123303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.128755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.128777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.128787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.133866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.133889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.133897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.139278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.139300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.139308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.144406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.144430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.144439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.149845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.149866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.149874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.155786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.155807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.155815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.161403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.161425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.161432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.167346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.167376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.173097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.173117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.173125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.178915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.178936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.178944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.184622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.184642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.184650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.190786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.190807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.190815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.196302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.196323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.004 [2024-12-13 09:36:45.196331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.004 [2024-12-13 09:36:45.201933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.004 [2024-12-13 09:36:45.201954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.201962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.207890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.207911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.207918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.213596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.213617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.219788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.219814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.219822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.225830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.225851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.225859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.231693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.231713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.231721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.237655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.237676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.237684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.243352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.243372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.243384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.249191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 
00:25:33.005 [2024-12-13 09:36:45.249212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.249220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.255094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.255116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.260979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.261005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.261014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.266549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.266570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.266578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.272176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.272197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.272205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.278227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.278248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.278256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.283925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.283946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.283954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.289608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.289629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.289637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.295286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.295308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.295316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.300896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.300917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.300925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.306335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.306357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.306364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.311869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.311890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.311898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.317553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.317575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.317583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.323097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.323118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.323126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.328824] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.328845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.328854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.334409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.334431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.334438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.340153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.340175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.340186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.345716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.345737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.345745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.351481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.351502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.351510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.357206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.005 [2024-12-13 09:36:45.357228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.005 [2024-12-13 09:36:45.357236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.005 [2024-12-13 09:36:45.363084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.006 [2024-12-13 09:36:45.363105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.006 [2024-12-13 09:36:45.363113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:33.006 [2024-12-13 09:36:45.368514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.006 [2024-12-13 09:36:45.368535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.006 [2024-12-13 09:36:45.368543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.374180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.374203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.374210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.379755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.379778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.379787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.385278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.385309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.390669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.390696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.390704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.396088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.396110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.396119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.401596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.401619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.401627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.407118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.407137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.407146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.412674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.412695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.412703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.418055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.418076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.418084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.423284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.423305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.423313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.428604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.428626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.433838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.433859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.433867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.439098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.439120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.439128] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.444312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.449692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.449713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.449720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.455018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.455039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.455047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.460471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.460492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.465791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.465812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.465820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.471021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.471042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.471050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.476244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.476265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.476273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.481426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.481447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.481463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.486649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.486670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.486678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.491886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.491907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.265 [2024-12-13 09:36:45.491915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.265 [2024-12-13 09:36:45.497227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.265 [2024-12-13 09:36:45.497248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.497256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.502643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.502665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.502673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.508034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.508055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.508062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.513466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.266 [2024-12-13 09:36:45.513495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.518937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.518957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.518965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.524363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.524384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.524392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.529811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.529836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.529844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.535278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.535299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.535307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.540829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.540850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.540859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.546383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.546404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.546412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.551811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.551833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.551841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.558097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.558119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.558128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.565424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.565460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.565469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.572751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.572774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.572782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.580362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.580384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.580392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.587803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.587825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.587834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.595573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.595595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.595603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.603073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.603094] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.603103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.611037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.611060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.611069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.266 5388.00 IOPS, 673.50 MiB/s [2024-12-13T08:36:45.632Z] [2024-12-13 09:36:45.619487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.619510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.619519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.266 [2024-12-13 09:36:45.627763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.266 [2024-12-13 09:36:45.627785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.266 [2024-12-13 09:36:45.627794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.526 [2024-12-13 09:36:45.635736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.526 [2024-12-13 09:36:45.635759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.526 [2024-12-13 09:36:45.635770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.643029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.643053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.643061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.650008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.650034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.650042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.655824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.655854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.661400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.661422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.661430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.666967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.666989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.666997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.672308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.672329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.672337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.677829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.677850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.677858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.683858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.683879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.683888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.689483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.689503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.689512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.695100] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.695121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.695129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.700904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.700926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.700934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.706721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.706742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.706750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.712226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.712247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.717775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.717795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.717803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.723277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.723298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.723306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.728953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.728974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:33.527 [2024-12-13 09:36:45.734688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.734709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.734717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.740408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.740429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.740437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.745921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.745942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.745953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.751945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.751966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.751974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.757776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.757797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.757804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.763357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.763379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.763387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.769087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.769109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.769117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.774881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.774902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.774910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.778665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.778685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.778693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.783361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.783382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.783389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.788671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.527 [2024-12-13 09:36:45.788692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.527 [2024-12-13 09:36:45.788701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.527 [2024-12-13 09:36:45.793951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.793976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.793984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.799283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.799304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.799312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.804732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.804754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.804761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.810651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.810673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.810681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.815934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.815955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.815963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.821385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.821407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.821414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.826304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.826326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.826334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.831606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.831627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.831635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.837067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.837088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.837096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.842644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.842665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.842673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.848124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.848145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.848153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.853574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.853595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.853603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.859001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.859022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.859031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.864755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.864777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.864785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.870394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.870416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.870424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.876145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.876166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.876173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.881822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.881844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.528 [2024-12-13 09:36:45.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.528 [2024-12-13 09:36:45.887438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.528 [2024-12-13 09:36:45.887465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.528 [2024-12-13 09:36:45.887476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.893057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.893081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.893089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.898903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.898925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.898933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.904350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.904371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.904379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.909785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.909807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.909814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.915011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.915032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.915040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.920294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.920316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.920324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.925690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.925711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.925719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.930926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.930947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.930955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.936160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.936184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.788 [2024-12-13 09:36:45.936192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.788 [2024-12-13 09:36:45.941498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.788 [2024-12-13 09:36:45.941520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.941529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.946785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.946815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.952016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.952038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.952046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.957283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.957305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.957313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.962511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.962533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.962541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.967810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.967832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.967840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.973055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.973076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.973084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.978322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.978343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.978354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.983566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.983587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.983595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.988784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.988805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.988813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.994015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 
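The records above all follow one pattern: nvme_tcp.c flags a data digest mismatch on qpair 0x1f766a0, the affected READ is printed, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. it stays retryable. The digest being checked is the NVMe/TCP DDGST field, a CRC-32C computed over the data portion of each received data PDU; the function name in the log (nvme_tcp_accel_seq_recv_compute_crc32_done) suggests the receive-side CRC-32C is run through SPDK's accel sequence path. The sketch below is not SPDK code, only a minimal standalone C illustration of the CRC-32C variant involved (reflected Castagnoli polynomial, initial value and final XOR of 0xFFFFFFFF), checked against the standard "123456789" test vector.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time CRC-32C (Castagnoli), the digest NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(const void *buf, size_t len)
{
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
        const char vec[] = "123456789";

        /* The well-known CRC-32C check value for "123456789" is 0xE3069283. */
        printf("crc32c = 0x%08" PRIX32 "\n", crc32c(vec, strlen(vec)));
        return 0;
}

A receiver compares the value it computes this way against the DDGST carried at the end of the PDU; on mismatch it cannot trust the payload, which is why each mismatch above surfaces as a transport-level (00/22) completion rather than a media or namespace error.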
00:25:33.789 [2024-12-13 09:36:45.994037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.994044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:45.999269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:45.999290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:45.999298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.004559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.004580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.004587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.009852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.009873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.009881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.015140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.015161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.015169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.020416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.020438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.020446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.025707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.025731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.025740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.030993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.031014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.031022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.036228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.036249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.036257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.041549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.041570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.041577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.046829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.046850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.046859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.052090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.052111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.052119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.058203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.058224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.065376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.065398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.065406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.072641] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.072662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.072670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.080617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.080639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.080647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.789 [2024-12-13 09:36:46.089128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.789 [2024-12-13 09:36:46.089151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.789 [2024-12-13 09:36:46.089159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.097712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.097734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.097742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.105915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.105938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.105946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.113446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.113475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.113483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.121578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.121602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.121610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:33.790 [2024-12-13 09:36:46.129654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.129688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.129697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.137830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.137854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.137862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.145936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.145960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.145972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:33.790 [2024-12-13 09:36:46.153975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:33.790 [2024-12-13 09:36:46.153998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.790 [2024-12-13 09:36:46.154006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.162144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.162167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.162175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.169772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.169795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.169804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.177769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.177791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.177800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.185651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.185673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.185682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.193307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.193330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.193338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.199277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.199299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.199308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.205017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.205039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.205047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.210595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.210621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.210630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.216295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.216316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.216324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.221939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.221961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.221968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.227550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.227571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.227579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.233176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.233197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.233205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.238783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.238804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.238811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.244329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.244350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.244358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.249878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.249899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.249907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.255809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.255831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 [2024-12-13 09:36:46.255839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.261418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.261440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.050 
[2024-12-13 09:36:46.261454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.050 [2024-12-13 09:36:46.266983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.050 [2024-12-13 09:36:46.267005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.267013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.272533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.272555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.272563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.278174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.278196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.278203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.283808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.283830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.283838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.289461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.289482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.289490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.294997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.295017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.295025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.300736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.300758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.300765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.306555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.306576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.309569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.309590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.309598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.315008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.315029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.320511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.320534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.320542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.326005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.326026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.326034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.331363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.331385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.331392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.336808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.336830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.336838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.342261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.342283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.342291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.347819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.347840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.347848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.353306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.353328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.353335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.359035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.359056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.359064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.364648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.364669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.364677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.370143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.370164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.370171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.375586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.375607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.375615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.381228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.381249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.381257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.386575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.386596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.386603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.392079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.392100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.392109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.397505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.397529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.397541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.402937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.402960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.402969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.408660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 [2024-12-13 09:36:46.408682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.408690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.051 [2024-12-13 09:36:46.414999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.051 
[2024-12-13 09:36:46.415022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.051 [2024-12-13 09:36:46.415030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.311 [2024-12-13 09:36:46.422590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.311 [2024-12-13 09:36:46.422613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.311 [2024-12-13 09:36:46.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.311 [2024-12-13 09:36:46.430370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.311 [2024-12-13 09:36:46.430393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.311 [2024-12-13 09:36:46.430402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.311 [2024-12-13 09:36:46.437466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.311 [2024-12-13 09:36:46.437488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.311 [2024-12-13 09:36:46.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.311 [2024-12-13 09:36:46.444319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.311 [2024-12-13 09:36:46.444341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.311 [2024-12-13 09:36:46.444350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.311 [2024-12-13 09:36:46.451457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.311 [2024-12-13 09:36:46.451480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.311 [2024-12-13 09:36:46.451488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.459800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.459836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.467753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.467784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.475335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.475358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.475366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.482183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.482205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.489718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.489742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.489750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.498535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.498566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.505786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.505809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.505817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.513080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.513102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.513110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.521382] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.521404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.521413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.529221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.529244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.529252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.536839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.536862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.536870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.543516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.543538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.543547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.550698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.550721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.550729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.559426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.559452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.559461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.567385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.567408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.567416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:25:34.312 [2024-12-13 09:36:46.576179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.576202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.584075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.584099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.584107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.591046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.591068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.591081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.598688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.598711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.598719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.605966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.605988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.605996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.312 [2024-12-13 09:36:46.613023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.613054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.312 5220.00 IOPS, 652.50 MiB/s [2024-12-13T08:36:46.678Z] [2024-12-13 09:36:46.622131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f766a0) 00:25:34.312 [2024-12-13 09:36:46.622154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.312 [2024-12-13 09:36:46.622162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.312 00:25:34.312 Latency(us) 00:25:34.312 [2024-12-13T08:36:46.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.312 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:34.312 nvme0n1 : 2.00 5215.44 651.93 0.00 0.00 3064.33 491.52 11734.06 00:25:34.312 [2024-12-13T08:36:46.678Z] =================================================================================================================== 00:25:34.312 [2024-12-13T08:36:46.678Z] Total : 5215.44 651.93 0.00 0.00 3064.33 491.52 11734.06 00:25:34.312 { 00:25:34.312 "results": [ 00:25:34.312 { 00:25:34.312 "job": "nvme0n1", 00:25:34.312 "core_mask": "0x2", 00:25:34.312 "workload": "randread", 00:25:34.312 "status": "finished", 00:25:34.312 "queue_depth": 16, 00:25:34.312 "io_size": 131072, 00:25:34.312 "runtime": 2.004818, 00:25:34.312 "iops": 5215.436014640731, 00:25:34.312 "mibps": 651.9295018300913, 00:25:34.312 "io_failed": 0, 00:25:34.312 "io_timeout": 0, 00:25:34.312 "avg_latency_us": 3064.333057893394, 00:25:34.312 "min_latency_us": 491.52, 00:25:34.312 "max_latency_us": 11734.064761904761 00:25:34.312 } 00:25:34.312 ], 00:25:34.312 "core_count": 1 00:25:34.312 } 00:25:34.312 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:34.312 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:34.312 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:34.312 | .driver_specific 00:25:34.312 | .nvme_error 00:25:34.312 | .status_code 00:25:34.312 | .command_transient_transport_error' 00:25:34.312 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 338 > 0 )) 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3469098 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3469098 ']' 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3469098 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469098 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3469098' 00:25:34.572 killing process with pid 3469098 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3469098 00:25:34.572 Received shutdown 
signal, test time was about 2.000000 seconds 00:25:34.572 00:25:34.572 Latency(us) 00:25:34.572 [2024-12-13T08:36:46.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.572 [2024-12-13T08:36:46.938Z] =================================================================================================================== 00:25:34.572 [2024-12-13T08:36:46.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.572 09:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3469098 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3469559 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3469559 /var/tmp/bperf.sock 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3469559 ']' 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:34.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.832 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.832 [2024-12-13 09:36:47.084970] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:34.832 [2024-12-13 09:36:47.085018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469559 ] 00:25:34.832 [2024-12-13 09:36:47.148246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.832 [2024-12-13 09:36:47.183892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.091 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.659 nvme0n1 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:35.659 09:36:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:35.659 Running I/O for 2 seconds... 
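The trace around this point is the digest-error path of host/digest.sh: the harness reads back the transient-transport-error counter from the previous randread run, kills that bdevperf instance, then restarts bdevperf for the randwrite/4096-byte/qd128 case with data digest (--ddgst) enabled and crc32c corruption injected, which produces the flood of COMMAND TRANSIENT TRANSPORT ERROR completions that follows. A condensed sketch of that sequence, assembled only from the commands visible in this trace (paths, the /var/tmp/bperf.sock socket, the 10.0.0.2/4420 target address, and all flags are copied verbatim from the log; how the rpc_cmd helper resolves the nvmf target's RPC socket and the waitforlisten plumbing are part of the harness and are only inferred here):

  # Count transient transport errors reported by the previous (randread) run.
  errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
             bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))   # this run reported 338

  # Restart bdevperf for run_bperf_err randwrite 4096 128 (2-second run, queue depth 128).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # bdevperf side: keep per-NVMe error statistics and retry indefinitely.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (via the harness's rpc_cmd helper): clear any previous injection,
  # attach the controller with data digest enabled, then arm crc32c corruption
  # with the interval shown in the trace.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # Drive I/O; each corrupted digest surfaces below as a data digest error plus a
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests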
00:25:35.659 [2024-12-13 09:36:47.903481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebfd0 00:25:35.659 [2024-12-13 09:36:47.904230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.904260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.912991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efd208 00:25:35.659 [2024-12-13 09:36:47.913934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.913958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.922361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:35.659 [2024-12-13 09:36:47.923084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.923105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.930985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee1710 00:25:35.659 [2024-12-13 09:36:47.932034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.932058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.940488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efcdd0 00:25:35.659 [2024-12-13 09:36:47.941646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.941666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.949987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef31b8 00:25:35.659 [2024-12-13 09:36:47.951273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.951292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.958346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:35.659 [2024-12-13 09:36:47.959183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.959203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.967535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efc998 00:25:35.659 [2024-12-13 09:36:47.968242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.968261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.976731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede8a8 00:25:35.659 [2024-12-13 09:36:47.977681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.977699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.985035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef1868 00:25:35.659 [2024-12-13 09:36:47.986305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.986323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:47.994501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8618 00:25:35.659 [2024-12-13 09:36:47.995656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.659 [2024-12-13 09:36:47.995676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:35.659 [2024-12-13 09:36:48.003928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf118 00:25:35.660 [2024-12-13 09:36:48.005232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.660 [2024-12-13 09:36:48.005251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:35.660 [2024-12-13 09:36:48.013370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef96f8 00:25:35.660 [2024-12-13 09:36:48.014773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.660 [2024-12-13 09:36:48.014792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:35.660 [2024-12-13 09:36:48.021815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef1868 00:25:35.660 [2024-12-13 09:36:48.022786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.660 [2024-12-13 09:36:48.022806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.031225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eecc78 00:25:35.919 [2024-12-13 09:36:48.032400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.032419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.038036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee5658 00:25:35.919 [2024-12-13 09:36:48.038727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.038746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.049468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efc560 00:25:35.919 [2024-12-13 09:36:48.050754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.050773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.059118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef31b8 00:25:35.919 [2024-12-13 09:36:48.060491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.060510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.068632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0630 00:25:35.919 [2024-12-13 09:36:48.070134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.070152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.075130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efcdd0 00:25:35.919 [2024-12-13 09:36:48.075810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.075829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.084549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8618 00:25:35.919 [2024-12-13 09:36:48.084980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.919 [2024-12-13 09:36:48.084999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:35.919 [2024-12-13 09:36:48.093011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0bc0 00:25:35.919 [2024-12-13 09:36:48.093773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.093792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.104194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef81e0 00:25:35.920 [2024-12-13 09:36:48.105535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.105554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.112495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efd640 00:25:35.920 [2024-12-13 09:36:48.113392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.113411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.121638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8618 00:25:35.920 [2024-12-13 09:36:48.122776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.122795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.131860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeff18 00:25:35.920 [2024-12-13 09:36:48.133433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.133455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.138193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef81e0 00:25:35.920 [2024-12-13 09:36:48.138964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.138983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.147512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee6fa8 00:25:35.920 [2024-12-13 09:36:48.148026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.148045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.156615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef2510 00:25:35.920 [2024-12-13 09:36:48.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.157412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.166111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7da8 00:25:35.920 [2024-12-13 09:36:48.166767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.166791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.176572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efbcf0 00:25:35.920 [2024-12-13 09:36:48.178003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.178022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.185954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef96f8 00:25:35.920 [2024-12-13 09:36:48.187518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.187536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.192303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee6738 00:25:35.920 [2024-12-13 09:36:48.193058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.193077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.201655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee8d30 00:25:35.920 [2024-12-13 09:36:48.202155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.202174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.211981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:35.920 [2024-12-13 09:36:48.213291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.213310] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.221456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee01f8 00:25:35.920 [2024-12-13 09:36:48.222888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.222906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.230818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee23b8 00:25:35.920 [2024-12-13 09:36:48.232376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.232395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.237278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf988 00:25:35.920 [2024-12-13 09:36:48.238039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.238058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.247592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee12d8 00:25:35.920 [2024-12-13 09:36:48.248780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.248799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.256986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef5be8 00:25:35.920 [2024-12-13 09:36:48.258292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.258311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.266400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed4e8 00:25:35.920 [2024-12-13 09:36:48.267823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.267842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.274779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:35.920 [2024-12-13 09:36:48.275757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.275776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.920 [2024-12-13 09:36:48.283168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebb98 00:25:35.920 [2024-12-13 09:36:48.284250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.920 [2024-12-13 09:36:48.284268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.292857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed920 00:25:36.180 [2024-12-13 09:36:48.294046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.294065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.302300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0a68 00:25:36.180 [2024-12-13 09:36:48.303602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.303621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.311709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edece0 00:25:36.180 [2024-12-13 09:36:48.313124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.313142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.321086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeee38 00:25:36.180 [2024-12-13 09:36:48.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.322643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.327424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eddc00 00:25:36.180 [2024-12-13 09:36:48.328125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.335946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeb760 00:25:36.180 [2024-12-13 09:36:48.336643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 
09:36:48.336662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.345323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0ea0 00:25:36.180 [2024-12-13 09:36:48.346137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.346160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.354715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeb328 00:25:36.180 [2024-12-13 09:36:48.355666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.355685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.364107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:36.180 [2024-12-13 09:36:48.365160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.365179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.373551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef1868 00:25:36.180 [2024-12-13 09:36:48.374746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.374765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.382974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0ea0 00:25:36.180 [2024-12-13 09:36:48.384266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.384284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.392365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eec840 00:25:36.180 [2024-12-13 09:36:48.393785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.393803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.401892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eef270 00:25:36.180 [2024-12-13 09:36:48.403427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:36.180 [2024-12-13 09:36:48.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.408279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efac10 00:25:36.180 [2024-12-13 09:36:48.409019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.409039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.417040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef46d0 00:25:36.180 [2024-12-13 09:36:48.417737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.417757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.426560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0a68 00:25:36.180 [2024-12-13 09:36:48.427365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.427384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.436665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed920 00:25:36.180 [2024-12-13 09:36:48.437534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.437553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.445865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eec840 00:25:36.180 [2024-12-13 09:36:48.446585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.446604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.454403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eff3c8 00:25:36.180 [2024-12-13 09:36:48.455767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.455786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.462883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee73e0 00:25:36.180 [2024-12-13 09:36:48.463498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.463518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.472529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efeb58 00:25:36.180 [2024-12-13 09:36:48.473505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.473525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.482014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef6458 00:25:36.180 [2024-12-13 09:36:48.483066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.483088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.491375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eef270 00:25:36.180 [2024-12-13 09:36:48.492548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.492567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.500780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efa3a0 00:25:36.180 [2024-12-13 09:36:48.502071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.502090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.510159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeaef0 00:25:36.180 [2024-12-13 09:36:48.511585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.511604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.519551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eea680 00:25:36.180 [2024-12-13 09:36:48.521082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.521101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.525877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.180 [2024-12-13 09:36:48.526618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18293 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.180 [2024-12-13 09:36:48.526637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:36.180 [2024-12-13 09:36:48.535182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeb328 00:25:36.181 [2024-12-13 09:36:48.535671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.181 [2024-12-13 09:36:48.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:36.181 [2024-12-13 09:36:48.544345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eec408 00:25:36.181 [2024-12-13 09:36:48.545095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.181 [2024-12-13 09:36:48.545115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.553808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef2d80 00:25:36.440 [2024-12-13 09:36:48.554395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.554415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.562888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef81e0 00:25:36.440 [2024-12-13 09:36:48.563731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.563749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.572140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef1430 00:25:36.440 [2024-12-13 09:36:48.572869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.572889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.580680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed0b0 00:25:36.440 [2024-12-13 09:36:48.581709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.581728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.590393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed0b0 00:25:36.440 [2024-12-13 09:36:48.591490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:3142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.591509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.599389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed0b0 00:25:36.440 [2024-12-13 09:36:48.600399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.600418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.608344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed0b0 00:25:36.440 [2024-12-13 09:36:48.609373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.609392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.617384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eed0b0 00:25:36.440 [2024-12-13 09:36:48.618360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.618379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.627660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee5a90 00:25:36.440 [2024-12-13 09:36:48.629105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.629125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.635004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee5658 00:25:36.440 [2024-12-13 09:36:48.635915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.635934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.643850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efe2e8 00:25:36.440 [2024-12-13 09:36:48.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.645082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.653749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:36.440 [2024-12-13 09:36:48.654864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:20148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.654883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.662693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0788 00:25:36.440 [2024-12-13 09:36:48.663957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.663976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.671296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee8088 00:25:36.440 [2024-12-13 09:36:48.672209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.672230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.680312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee1f80 00:25:36.440 [2024-12-13 09:36:48.681198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.681217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.689676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf550 00:25:36.440 [2024-12-13 09:36:48.690357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.690377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.698896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef92c0 00:25:36.440 [2024-12-13 09:36:48.699911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.699930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.707176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efbcf0 00:25:36.440 [2024-12-13 09:36:48.708374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.708393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.714854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef57b0 00:25:36.440 [2024-12-13 09:36:48.715478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.715500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.724832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee6b70 00:25:36.440 [2024-12-13 09:36:48.725596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.725616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.733808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef1430 00:25:36.440 [2024-12-13 09:36:48.734560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.734578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.742791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef9f68 00:25:36.440 [2024-12-13 09:36:48.743560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.743579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.751745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0a68 00:25:36.440 [2024-12-13 09:36:48.752516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.752534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.760711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3498 00:25:36.440 [2024-12-13 09:36:48.761481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.440 [2024-12-13 09:36:48.761500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.440 [2024-12-13 09:36:48.769680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef6458 00:25:36.440 [2024-12-13 09:36:48.770446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.441 [2024-12-13 09:36:48.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.441 [2024-12-13 09:36:48.778873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eea248 00:25:36.441 [2024-12-13 09:36:48.779414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.441 [2024-12-13 09:36:48.779433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.441 [2024-12-13 09:36:48.788019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7100 00:25:36.441 [2024-12-13 09:36:48.788879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.441 [2024-12-13 09:36:48.788897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.441 [2024-12-13 09:36:48.796989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee5220 00:25:36.441 [2024-12-13 09:36:48.797877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.441 [2024-12-13 09:36:48.797895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.441 [2024-12-13 09:36:48.806110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef46d0 00:25:36.700 [2024-12-13 09:36:48.807009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.807027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.815281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee27f0 00:25:36.700 [2024-12-13 09:36:48.816162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.816181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.824581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efc998 00:25:36.700 [2024-12-13 09:36:48.825242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.825260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.833647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eecc78 00:25:36.700 [2024-12-13 09:36:48.834660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.834678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.842631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8a50 00:25:36.700 [2024-12-13 09:36:48.843652] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.843671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.851609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8618 00:25:36.700 [2024-12-13 09:36:48.852596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.852615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.860599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef6890 00:25:36.700 [2024-12-13 09:36:48.861577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.700 [2024-12-13 09:36:48.861596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.700 [2024-12-13 09:36:48.869567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efd640 00:25:36.701 [2024-12-13 09:36:48.870535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.870553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.877894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef3a28 00:25:36.701 [2024-12-13 09:36:48.879032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.879051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.886188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eec408 00:25:36.701 [2024-12-13 09:36:48.886827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.886846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.895143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef57b0 00:25:36.701 [2024-12-13 09:36:48.896012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.896031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:36.701 28232.00 IOPS, 110.28 MiB/s [2024-12-13T08:36:49.067Z] [2024-12-13 09:36:48.904030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb39410) with pdu=0x200016ee0ea0 00:25:36.701 [2024-12-13 09:36:48.904684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.904704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.912982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efb480 00:25:36.701 [2024-12-13 09:36:48.913658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.913678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.922200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee9168 00:25:36.701 [2024-12-13 09:36:48.922877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.922898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.931273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee84c0 00:25:36.701 [2024-12-13 09:36:48.931931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.931950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.940257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeb328 00:25:36.701 [2024-12-13 09:36:48.940888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.940907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.950369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efc998 00:25:36.701 [2024-12-13 09:36:48.951723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.951745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.958935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef9b30 00:25:36.701 [2024-12-13 09:36:48.959736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.959754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.968125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb39410) with pdu=0x200016eeea00 00:25:36.701 [2024-12-13 09:36:48.968875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.968894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.977160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee99d8 00:25:36.701 [2024-12-13 09:36:48.977920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.977939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.985550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3498 00:25:36.701 [2024-12-13 09:36:48.986286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.986306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:48.994957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edfdc0 00:25:36.701 [2024-12-13 09:36:48.995755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:48.995774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.004403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7538 00:25:36.701 [2024-12-13 09:36:49.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.005437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.013814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3060 00:25:36.701 [2024-12-13 09:36:49.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.014943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.023259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efa3a0 00:25:36.701 [2024-12-13 09:36:49.024393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.024413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.032705] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef96f8 00:25:36.701 [2024-12-13 09:36:49.034052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.034071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.041087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:36.701 [2024-12-13 09:36:49.042087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.042106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.050007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.701 [2024-12-13 09:36:49.050948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.050968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.701 [2024-12-13 09:36:49.059039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.701 [2024-12-13 09:36:49.060034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.701 [2024-12-13 09:36:49.060053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.068258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.069247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.069266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.077419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.078426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.078444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.086375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.087348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.087366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 
09:36:49.095338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.096325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.096345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.105504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.106958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.106978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.114912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7100 00:25:36.961 [2024-12-13 09:36:49.116478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.121254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eef6a8 00:25:36.961 [2024-12-13 09:36:49.122009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.122029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.131525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efc998 00:25:36.961 [2024-12-13 09:36:49.132673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.132693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.141038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee95a0 00:25:36.961 [2024-12-13 09:36:49.142315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.142334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.148395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede038 00:25:36.961 [2024-12-13 09:36:49.149149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.149168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:25:36.961 [2024-12-13 09:36:49.157437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:36.961 [2024-12-13 09:36:49.158184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.158202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.168220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:36.961 [2024-12-13 09:36:49.169549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.169569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.175615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.961 [2024-12-13 09:36:49.176349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.185156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.961 [2024-12-13 09:36:49.186030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.186052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.194874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.961 [2024-12-13 09:36:49.195850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.195870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.203933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.961 [2024-12-13 09:36:49.204892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.204911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.212944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.961 [2024-12-13 09:36:49.213875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.961 [2024-12-13 09:36:49.213894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:36.961 [2024-12-13 09:36:49.223088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:36.961 [2024-12-13 09:36:49.224408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.224427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.232481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef20d8 00:25:36.962 [2024-12-13 09:36:49.233926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.233944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.239654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:36.962 [2024-12-13 09:36:49.240613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.240632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.249060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef35f0 00:25:36.962 [2024-12-13 09:36:49.250201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.250220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.258515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebfd0 00:25:36.962 [2024-12-13 09:36:49.259778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.259797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.266884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eee190 00:25:36.962 [2024-12-13 09:36:49.267821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.267839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.275753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf550 00:25:36.962 [2024-12-13 09:36:49.276672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.276690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.284739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef81e0 00:25:36.962 [2024-12-13 09:36:49.285585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.285604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.293654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebfd0 00:25:36.962 [2024-12-13 09:36:49.294487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.294506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.302866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efb048 00:25:36.962 [2024-12-13 09:36:49.303574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.303593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.311403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0630 00:25:36.962 [2024-12-13 09:36:49.312654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.312673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:36.962 [2024-12-13 09:36:49.319734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efeb58 00:25:36.962 [2024-12-13 09:36:49.320427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:36.962 [2024-12-13 09:36:49.320451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.328906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eef6a8 00:25:37.222 [2024-12-13 09:36:49.329624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.329643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.338037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7970 00:25:37.222 [2024-12-13 09:36:49.338734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.338753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.347026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef2948 00:25:37.222 [2024-12-13 09:36:49.347727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.347746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.355985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee38d0 00:25:37.222 [2024-12-13 09:36:49.356682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.356701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.364957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efe720 00:25:37.222 [2024-12-13 09:36:49.365640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.365660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.373994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf988 00:25:37.222 [2024-12-13 09:36:49.374695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.374714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.382972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef2510 00:25:37.222 [2024-12-13 09:36:49.383656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.383675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.391960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4f40 00:25:37.222 [2024-12-13 09:36:49.392663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.392682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.400939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef7100 00:25:37.222 [2024-12-13 09:36:49.401642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.401661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.410006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede470 00:25:37.222 [2024-12-13 09:36:49.410701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.410720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.419046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef6cc8 00:25:37.222 [2024-12-13 09:36:49.419742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.419762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.428239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efe2e8 00:25:37.222 [2024-12-13 09:36:49.428962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.428983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.437368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:37.222 [2024-12-13 09:36:49.438071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.438090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.446547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee1b48 00:25:37.222 [2024-12-13 09:36:49.447245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.447264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.455514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee7c50 00:25:37.222 [2024-12-13 09:36:49.456204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.456223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.464492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edece0 00:25:37.222 [2024-12-13 09:36:49.465183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.465202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.473488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edfdc0 00:25:37.222 [2024-12-13 09:36:49.474191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.474211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.482483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee0a68 00:25:37.222 [2024-12-13 09:36:49.483166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.483184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.491456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eefae0 00:25:37.222 [2024-12-13 09:36:49.492128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.492147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.500418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf118 00:25:37.222 [2024-12-13 09:36:49.501015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.501038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.509712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3498 00:25:37.222 [2024-12-13 09:36:49.510499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.510517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.518660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef6020 00:25:37.222 [2024-12-13 09:36:49.519550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.519568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.528641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef9f68 00:25:37.222 [2024-12-13 09:36:49.529688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.222 [2024-12-13 09:36:49.529706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:37.222 [2024-12-13 09:36:49.537629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee5658 00:25:37.222 [2024-12-13 09:36:49.538664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.538683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:37.223 [2024-12-13 09:36:49.546591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ede8a8 00:25:37.223 [2024-12-13 09:36:49.547644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.547662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:37.223 [2024-12-13 09:36:49.554846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee4578 00:25:37.223 [2024-12-13 09:36:49.556058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.556076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:37.223 [2024-12-13 09:36:49.563143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0788 00:25:37.223 [2024-12-13 09:36:49.563816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.563834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.223 [2024-12-13 09:36:49.572089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef96f8 00:25:37.223 [2024-12-13 09:36:49.572779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.572797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.223 [2024-12-13 09:36:49.581105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efeb58 00:25:37.223 [2024-12-13 09:36:49.581798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.223 [2024-12-13 09:36:49.581817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.590468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efb8b8 00:25:37.483 [2024-12-13 09:36:49.591158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 
09:36:49.591178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.599574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eff3c8 00:25:37.483 [2024-12-13 09:36:49.600253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.600272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.608021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef9b30 00:25:37.483 [2024-12-13 09:36:49.608688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.608707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.618064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efcdd0 00:25:37.483 [2024-12-13 09:36:49.618891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.618910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.627047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eec840 00:25:37.483 [2024-12-13 09:36:49.627849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.627868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.636233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0bc0 00:25:37.483 [2024-12-13 09:36:49.637049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.637070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.645380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3060 00:25:37.483 [2024-12-13 09:36:49.646216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.646235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.654529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0ff8 00:25:37.483 [2024-12-13 09:36:49.655328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 
[2024-12-13 09:36:49.655348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.663043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8a50 00:25:37.483 [2024-12-13 09:36:49.663746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.663766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.672608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016edf550 00:25:37.483 [2024-12-13 09:36:49.673529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.673549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.682296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efcdd0 00:25:37.483 [2024-12-13 09:36:49.683334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.683354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.691835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef31b8 00:25:37.483 [2024-12-13 09:36:49.692971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.692991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.701345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef8a50 00:25:37.483 [2024-12-13 09:36:49.702606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.702625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.710746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee6738 00:25:37.483 [2024-12-13 09:36:49.712120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.712139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.719036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee4578 00:25:37.483 [2024-12-13 09:36:49.719968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12100 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:37.483 [2024-12-13 09:36:49.719988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.728121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3060 00:25:37.483 [2024-12-13 09:36:49.729172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.729191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.735694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efd208 00:25:37.483 [2024-12-13 09:36:49.736133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.736155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.745062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0788 00:25:37.483 [2024-12-13 09:36:49.745625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.745644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.754194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeff18 00:25:37.483 [2024-12-13 09:36:49.754998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.755017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.763412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebb98 00:25:37.483 [2024-12-13 09:36:49.764466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.483 [2024-12-13 09:36:49.764485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:37.483 [2024-12-13 09:36:49.773632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee23b8 00:25:37.484 [2024-12-13 09:36:49.775159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.775178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.779996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efb8b8 00:25:37.484 [2024-12-13 09:36:49.780678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21646 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.780697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.790232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efd208 00:25:37.484 [2024-12-13 09:36:49.791346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.791365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.799373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016efef90 00:25:37.484 [2024-12-13 09:36:49.800490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.800509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.808391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:37.484 [2024-12-13 09:36:49.809051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.809071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.817581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee73e0 00:25:37.484 [2024-12-13 09:36:49.818520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.818539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.827919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee73e0 00:25:37.484 [2024-12-13 09:36:49.829429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.829450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.834294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef4b08 00:25:37.484 [2024-12-13 09:36:49.834963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.834982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:37.484 [2024-12-13 09:36:49.844512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee1b48 00:25:37.484 [2024-12-13 09:36:49.845891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:16215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.484 [2024-12-13 09:36:49.845910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.852444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef57b0 00:25:37.743 [2024-12-13 09:36:49.853210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.853230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.862547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eeee38 00:25:37.743 [2024-12-13 09:36:49.863325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.863344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.871728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ee3d08 00:25:37.743 [2024-12-13 09:36:49.872369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.872388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.881096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016ef0350 00:25:37.743 [2024-12-13 09:36:49.881859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.881878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.889572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eedd58 00:25:37.743 [2024-12-13 09:36:49.890918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.890936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:37.743 [2024-12-13 09:36:49.897427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb39410) with pdu=0x200016eebb98 00:25:37.743 [2024-12-13 09:36:49.898246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.743 [2024-12-13 09:36:49.898266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:37.743 28244.50 IOPS, 110.33 MiB/s 00:25:37.743 Latency(us) 00:25:37.743 [2024-12-13T08:36:50.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:25:37.743 nvme0n1 : 2.00 28272.61 110.44 0.00 0.00 4523.29 2231.34 11359.57 00:25:37.743 [2024-12-13T08:36:50.109Z] =================================================================================================================== 00:25:37.743 [2024-12-13T08:36:50.109Z] Total : 28272.61 110.44 0.00 0.00 4523.29 2231.34 11359.57 00:25:37.743 { 00:25:37.743 "results": [ 00:25:37.743 { 00:25:37.743 "job": "nvme0n1", 00:25:37.743 "core_mask": "0x2", 00:25:37.743 "workload": "randwrite", 00:25:37.743 "status": "finished", 00:25:37.743 "queue_depth": 128, 00:25:37.743 "io_size": 4096, 00:25:37.743 "runtime": 2.002539, 00:25:37.743 "iops": 28272.607924240176, 00:25:37.743 "mibps": 110.43987470406319, 00:25:37.743 "io_failed": 0, 00:25:37.743 "io_timeout": 0, 00:25:37.743 "avg_latency_us": 4523.289641189715, 00:25:37.743 "min_latency_us": 2231.344761904762, 00:25:37.743 "max_latency_us": 11359.573333333334 00:25:37.743 } 00:25:37.743 ], 00:25:37.743 "core_count": 1 00:25:37.743 } 00:25:37.743 09:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:37.743 09:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:37.743 09:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:37.743 | .driver_specific 00:25:37.743 | .nvme_error 00:25:37.743 | .status_code 00:25:37.743 | .command_transient_transport_error' 00:25:37.743 09:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3469559 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3469559 ']' 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3469559 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469559 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3469559' 00:25:38.002 killing process with pid 3469559 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3469559 00:25:38.002 Received shutdown signal, test time was about 2.000000 seconds 00:25:38.002 00:25:38.002 Latency(us) 00:25:38.002 [2024-12-13T08:36:50.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.002 [2024-12-13T08:36:50.368Z] 
=================================================================================================================== 00:25:38.002 [2024-12-13T08:36:50.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3469559 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:38.002 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3470024 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3470024 /var/tmp/bperf.sock 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3470024 ']' 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.003 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:38.262 [2024-12-13 09:36:50.379710] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:38.262 [2024-12-13 09:36:50.379758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470024 ] 00:25:38.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.262 Zero copy mechanism will not be used. 
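The teardown traced just above for the 4096-byte run shows how digest.sh decides pass/fail: get_transient_errcount queries bdev_get_iostat over the bdevperf RPC socket, pulls command_transient_transport_error out of the NVMe error statistics with jq, and requires a non-zero count (222 > 0 here) before killing the bdevperf process. A minimal stand-alone sketch of that check, assuming the socket path, bdev name, and rpc.py location seen in this log and a bdevperf instance that is already running:

    # Sketch (assumption: bdevperf is up on /var/tmp/bperf.sock with an attached bdev named nvme0n1)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Data-digest corruption must have produced at least one transient transport error
    (( errcount > 0 ))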
00:25:38.262 [2024-12-13 09:36:50.444542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.262 [2024-12-13 09:36:50.485091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.262 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.262 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:38.262 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:38.262 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.520 09:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.779 nvme0n1 00:25:39.039 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:39.040 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.040 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:39.040 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.040 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:39.040 09:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:39.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.040 Zero copy mechanism will not be used. 00:25:39.040 Running I/O for 2 seconds... 
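With the 131072-byte bdevperf instance launched just above (-w randwrite -o 131072 -t 2 -q 16 -z on /var/tmp/bperf.sock), the trace walks through the error-injection setup before the timed run: enable per-command NVMe error statistics and unlimited retries on the bdevperf side, keep crc32c corruption disabled while the controller is attached over TCP with data digest enabled (--ddgst), then inject crc32c corruption on every 32nd operation and start perform_tests. A condensed sketch of that sequence under the same assumptions (accel_error_inject_error goes to the default RPC socket, as the rpc_cmd calls above suggest; addresses and NQN taken from this log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: record NVMe error stats and retry failed I/O indefinitely
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # crc32c corruption stays off while the controller is attached with data digest enabled
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c operation so data digests start failing on the wire
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the timed workload; the digest errors logged below are the expected outcome
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests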
00:25:39.040 [2024-12-13 09:36:51.260249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.260508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.260537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.266076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.266325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.266349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.272495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.272742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.272765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.278820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.279064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.279086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.284458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.284703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.284725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.290064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.290309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.290330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.295207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.295470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.295496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.300062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.300308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.300330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.304500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.304754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.304775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.308887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.309131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.309152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.313251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.313502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.313523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.317799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.318053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.318074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.322272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.322523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.322543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.326681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.326926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.326947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.330964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.331210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.331231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.335327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.335575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.335596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.339637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.339885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.339906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.343951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.344208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.344229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.348237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.348488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.348509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.352542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.352789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.352809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.356854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.357097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.357118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.361169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.361415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.361436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.365679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.365925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.365945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.370067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.370310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.370335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.040 [2024-12-13 09:36:51.374469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.040 [2024-12-13 09:36:51.374715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.040 [2024-12-13 09:36:51.374736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.378840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.379087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.379108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.383200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.383445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.383479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.387783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.388027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 
[2024-12-13 09:36:51.388048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.392080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.392325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.392346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.396413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.396687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.400950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.401197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.401218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.041 [2024-12-13 09:36:51.405342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.041 [2024-12-13 09:36:51.405605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.041 [2024-12-13 09:36:51.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.409711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.409963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.409985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.414111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.414354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.414375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.418644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.418894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.418914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.423152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.423400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.423421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.427930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.428176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.428197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.433557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.433802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.433823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.439892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.440142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.446993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.447251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.447272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.453827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.454071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.454092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.460443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.460857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.460878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.467434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.467686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.301 [2024-12-13 09:36:51.467707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.301 [2024-12-13 09:36:51.474963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.301 [2024-12-13 09:36:51.475198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.475220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.482691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.482947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.482968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.490702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.490967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.498297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.498547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.498568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.506198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.506298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.506316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.513962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.514207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.514226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.522026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.522287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.522313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.529411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.529676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.529698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.535412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.535662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.535683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.541512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.541773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.541794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.547879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.548124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.548144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.554103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.554348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.554369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.560130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 
09:36:51.560373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.560394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.566156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.566407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.566429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.572203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.572473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.572493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.578092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.578338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.578359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.583801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.584014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.584034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.589513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.589719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.589740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.594879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.595085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.595106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.599982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with 
pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.600185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.600203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.605171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.605378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.605399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.610435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.610647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.610667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.615001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.615206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.615224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.619349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.619559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.619579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.623577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.623781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.623801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.302 [2024-12-13 09:36:51.627739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.302 [2024-12-13 09:36:51.627944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.302 [2024-12-13 09:36:51.627965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.631944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.632147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.632168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.636089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.636294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.636315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.640235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.640440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.640466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.644832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.645037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.645057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.649702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.649908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.649928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.654701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.654906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.654925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.659675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.659883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.659907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.303 [2024-12-13 09:36:51.664934] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.303 [2024-12-13 09:36:51.665141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.303 [2024-12-13 09:36:51.665160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.670681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.670911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.670932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.677421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.677743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.677764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.683770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.683975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.683995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.689516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.689720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.689740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.695006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.695267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.695288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.700687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.700911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.700931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:25:39.563 [2024-12-13 09:36:51.706354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.706585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.706605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.563 [2024-12-13 09:36:51.712116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.563 [2024-12-13 09:36:51.712365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.563 [2024-12-13 09:36:51.712386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.717767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.717982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.718003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.723604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.723820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.723839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.729544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.729801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.729822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.735428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.735641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.735660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.740254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.740466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.740485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.745519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.745744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.745764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.751593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.751857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.751878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.759011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.759223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.759244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.765441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.765757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.765779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.772863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.773161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.779973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.780226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.780248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.785028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.785234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.785256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.789399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.789613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.793733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.793939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.793959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.797982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.798190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.798211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.802245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.802456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.802476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.806460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.806664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.806689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.810640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.810842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.810861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.814802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.815009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.815028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.818974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.819179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.819207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.823166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.823370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.823389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.827371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.827588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.827617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.831541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.831746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.831765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.835640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.835844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.835863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.839769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.839974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 [2024-12-13 09:36:51.840003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.843900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.564 [2024-12-13 09:36:51.844109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.564 
[2024-12-13 09:36:51.844137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.564 [2024-12-13 09:36:51.848096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.848300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.848319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.852331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.852557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.852578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.856558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.856766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.856785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.860760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.860968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.860987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.864961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.865173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.865193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.869440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.869652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.869671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.873632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.873835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.873854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.878232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.878437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.878463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.883388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.883611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.883632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.888475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.888679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.888697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.893526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.893732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.893753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.898191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.898393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.898412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.902773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.903017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.903037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.907946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.908151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.908169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.912235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.912439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.912464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.916392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.916599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.916620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.920599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.920802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.920825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.565 [2024-12-13 09:36:51.924798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.565 [2024-12-13 09:36:51.925007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.565 [2024-12-13 09:36:51.925027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.929162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.929369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.929390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.933716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.933924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.933943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.938769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.938979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.938998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.943810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.944014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.944033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.948920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.949123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.949142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.953646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.953853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.953872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.958269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.958478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.958497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.962511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.962723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.962743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.966531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.966736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.966755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.970600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.970807] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.970827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.974660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.974864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.974885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.978712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.978917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.978937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.982769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.982974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.983002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.986802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.987006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.987025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.990842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.991045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.991064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.994880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:51.995085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.995104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:51.998945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 
00:25:39.826 [2024-12-13 09:36:51.999149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.826 [2024-12-13 09:36:51.999168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.826 [2024-12-13 09:36:52.002994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.826 [2024-12-13 09:36:52.003199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.003218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.007012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.007218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.007247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.011358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.011571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.011590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.015764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.015984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.016004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.019913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.020123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.023991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.024199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.024218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.028098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.028308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.028327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.032192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.032396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.032419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.036276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.036501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.036520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.040344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.040572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.040602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.044418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.044647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.044668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.048574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.048782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.048801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.052701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.052909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.052929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.056841] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.057048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.057066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.060900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.061107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.061126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.064943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.065147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.065166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.069015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.069242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.073495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.073709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.073729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.078604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.078820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.078841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.083845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.084068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:25:39.827 [2024-12-13 09:36:52.088474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.088677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.088696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.092951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.093154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.093172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.097465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.097672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.097690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.102039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.102245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.102264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.106606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.106810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.106828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.111128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.111331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.111350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.115593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.115797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.115815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.120429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.827 [2024-12-13 09:36:52.120641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.827 [2024-12-13 09:36:52.120662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.827 [2024-12-13 09:36:52.125331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.125539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.125558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.129879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.130082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.134320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.134528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.134546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.138814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.139019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.139039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.143346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.143556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.143575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.148272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.148478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.148500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.153297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.153505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.153523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.158447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.158658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.158677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.163310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.163517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.163536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.168203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.168406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.168425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.173108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.173310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.173329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.177421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.177647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.177669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.181675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.181883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.181901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.185917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.186120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.186139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.828 [2024-12-13 09:36:52.190179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:39.828 [2024-12-13 09:36:52.190387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.828 [2024-12-13 09:36:52.190408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.194404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.194613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.194634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.198792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.198998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.199018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.203343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.203550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.203569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.207426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.207638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.207657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.211527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.211732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 
[2024-12-13 09:36:52.211752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.215684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.215887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.215905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.219770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.219974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.220004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.223894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.224098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.224120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.227989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.228192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.228211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.232095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.232299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.232319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.236213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.236417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.236436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.240327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.240534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.240553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.244419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.244629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.244650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.248544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.248748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.248766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 6304.00 IOPS, 788.00 MiB/s [2024-12-13T08:36:52.455Z] [2024-12-13 09:36:52.253540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.253736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.253755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.257592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.257783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.257802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.261819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.262014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.262033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.266313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.266529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.266551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.270575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.270788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.270815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.275144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.275339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.275359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.280291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.280492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.280512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.285440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.285646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.285665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.290952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.291146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.291164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.296223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.296420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.296440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.300685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.300876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.300895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.304989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 
[2024-12-13 09:36:52.305181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.089 [2024-12-13 09:36:52.305200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.089 [2024-12-13 09:36:52.309189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.089 [2024-12-13 09:36:52.309378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.309397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.313661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.313853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.313872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.318057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.318250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.318270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.322559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.322750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.322769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.326994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.327184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.327202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.331413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.331611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.331630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.335894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) 
with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.336088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.336107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.340403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.340603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.340639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.344866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.345058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.349176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.349362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.349381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.353393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.353588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.353607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.357616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.357803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.357822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.361960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.362148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.362166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.366801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.366987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.367006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.371807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.371993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.372011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.376854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.377040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.377059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.381774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.381979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.381997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.386367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.386573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.386592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.391270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.391462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.391482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.396110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.396297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.396317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.401244] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.401429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.401456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.405911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.406097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.406115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.411088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.411282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.411301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.416096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.416283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.416302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.421181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.421377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.421396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.425867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.426059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.426078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.430470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.430681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
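
Editor's note on the repeated records above and below: each iteration follows the same pattern — tcp.c:data_crc32_calc_done reports a data digest error on the TCP qpair, the offending WRITE command is printed, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The NVMe/TCP data digest (DDGST) is a CRC-32C over the data PDU payload, so a mismatch between the computed and received digests means the payload cannot be trusted and the request is failed with a transient transport error, which is exactly what this digest test exercises; only the timestamps and LBAs vary between iterations. For orientation only, a minimal standalone sketch of such a check follows (illustrative, not SPDK's implementation; the helper names are hypothetical and SPDK uses its own CRC-32C utilities):

/* ddgst_sketch.c - illustrative CRC-32C (Castagnoli) data digest check.
 * NOT SPDK code; a self-contained sketch of the kind of verification the
 * "Data digest error" log lines refer to. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise reflected CRC-32C, polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns nonzero when the digest carried with the
 * data PDU matches the payload; a mismatch would be surfaced to the host
 * as a transient transport error, as seen in this log. */
static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t received_ddgst)
{
    return crc32c(payload, len) == received_ddgst;
}

int main(void)
{
    /* Standard CRC-32C check value: crc32c("123456789") == 0xE3069283 */
    const uint8_t vec[] = "123456789";
    printf("crc32c: 0x%08X (expect 0xE3069283)\n", crc32c(vec, 9));
    printf("digest check: %s\n",
           ddgst_ok(vec, 9, 0xE3069283u) ? "ok"
                                         : "mismatch -> transient transport error");
    return 0;
}
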
00:25:40.090 [2024-12-13 09:36:52.435402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.435598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.090 [2024-12-13 09:36:52.435617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.090 [2024-12-13 09:36:52.440766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.090 [2024-12-13 09:36:52.440962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.091 [2024-12-13 09:36:52.440981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.091 [2024-12-13 09:36:52.445908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.091 [2024-12-13 09:36:52.446100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.091 [2024-12-13 09:36:52.446118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.091 [2024-12-13 09:36:52.450878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.091 [2024-12-13 09:36:52.451074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.091 [2024-12-13 09:36:52.451093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.456289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.456487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.456505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.461273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.461477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.461496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.466148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.466361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.471325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.471520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.471539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.476337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.476531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.476551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.481557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.481749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.481768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.485982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.486174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.486193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.490259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.490447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.490472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.494513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.494708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.494726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.498891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.499084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.499102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.503454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.503648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.503666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.507918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.508112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.508131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.512165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.512358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.512377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.516822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.517014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.517032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.521921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.522120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.522139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.527087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.527294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.527313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.532753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.532967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.352 [2024-12-13 09:36:52.537632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.352 [2024-12-13 09:36:52.537824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.352 [2024-12-13 09:36:52.537843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.542216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.542406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.542425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.546600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.546793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.546812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.550854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.551053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.551072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.555040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.555230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.555249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.559150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.559340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.559359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.563295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.563491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 
[2024-12-13 09:36:52.563510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.567608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.567800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.567818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.572210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.572403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.572421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.576304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.576501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.576522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.580400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.580594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.580613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.584735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.584927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.584949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.589170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.589368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.589387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.593949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.594143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.594162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.599072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.599264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.599283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.604569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.604761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.604780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.609663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.609853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.609873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.614271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.614469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.618686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.618875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.618894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.622834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.623026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.623044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.627274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.627481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.631893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.632087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.632105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.636511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.636702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.636721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.641023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.641213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.641231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.645611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.645805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.645824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.650321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.650518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.650536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.654849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.655039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.655058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.659289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.353 [2024-12-13 09:36:52.659501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.353 [2024-12-13 09:36:52.663520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.353 [2024-12-13 09:36:52.663715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.663734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.667838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.668030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.668049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.672216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.672407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.672426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.677603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.677792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.677811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.682503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.682699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.682717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.687762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.687954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.687973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.692658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.692850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.692869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.698165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.698362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.698381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.703188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.703379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.703398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.707543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.707736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.707762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.711891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.712088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.712107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.354 [2024-12-13 09:36:52.716730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.354 [2024-12-13 09:36:52.716974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.354 [2024-12-13 09:36:52.716995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.721532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.721723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.721742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.725758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 
00:25:40.615 [2024-12-13 09:36:52.725952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.725971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.729919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.730111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.730130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.734031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.734222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.734241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.738168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.738360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.738379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.742441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.742641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.742660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.746918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.747110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.747128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.751066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.751256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.751274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.755136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.755327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.755347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.759268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.759470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.759490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.763380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.763579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.763598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.767494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.767687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.767705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.771674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.771867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.771887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.615 [2024-12-13 09:36:52.775814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.615 [2024-12-13 09:36:52.776009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.615 [2024-12-13 09:36:52.776029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.779980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.780176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.780199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.784129] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.784324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.788334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.788547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.792849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.793038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.793057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.797112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.797305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.797324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.801594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.801784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.801803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.806547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.806739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.806758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.811535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.811732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.811751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:25:40.616 [2024-12-13 09:36:52.816957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.817149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.817168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.821465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.821661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.821680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.825817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.826010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.826029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.830243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.830434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.830458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.834322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.834517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.834536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.838425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.838621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.838640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.842515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.842710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.842728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.846629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.846820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.846839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.850687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.850878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.850897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.854790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.854983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.855002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.859211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.859403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.859424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.863506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.863695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.863713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.867617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.867808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.867829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.871764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.871954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.871974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.875879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.876074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.876094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.879988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.880178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.880198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.884055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.884266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.888152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.888345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.888364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.892677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.892870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.616 [2024-12-13 09:36:52.892892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.616 [2024-12-13 09:36:52.897420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.616 [2024-12-13 09:36:52.897641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.897662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.902536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.902731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.902750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.907370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.907569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.907587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.912460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.912658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.912676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.917676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.917866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.917884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.922650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.922839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.922858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.928092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.928328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.928349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.933757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.933948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.933966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.939665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.939865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 
[2024-12-13 09:36:52.939883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.944559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.944750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.944769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.949641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.949843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.949862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.954777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.954967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.954986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.959829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.960021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.960039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.965306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.965502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.965520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.970067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.970259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.970277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.974698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.974890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.974909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.617 [2024-12-13 09:36:52.979174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.617 [2024-12-13 09:36:52.979367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.617 [2024-12-13 09:36:52.979386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:52.983445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.877 [2024-12-13 09:36:52.983645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.877 [2024-12-13 09:36:52.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:52.987905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.877 [2024-12-13 09:36:52.988100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.877 [2024-12-13 09:36:52.988120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:52.992544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.877 [2024-12-13 09:36:52.992739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.877 [2024-12-13 09:36:52.992758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:52.996755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.877 [2024-12-13 09:36:52.996948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.877 [2024-12-13 09:36:52.996967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:53.000946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.877 [2024-12-13 09:36:53.001141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.877 [2024-12-13 09:36:53.001161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.877 [2024-12-13 09:36:53.005117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.005308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.005327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.009263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.009462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.009481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.013422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.013620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.017711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.017911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.017934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.021969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.022167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.022188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.026185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.026405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.030412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.030613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.030633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.034661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.034858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.034877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.038858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.039056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.039077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.042995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.043192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.043212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.047166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.047363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.047382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.051309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.051510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.051528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.055549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.055750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.055770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.060183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.060381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.060400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.064462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.064654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.064675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.069037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.069257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.069278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.073519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.073710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.073729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.077974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.078166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.078186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.083048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.083242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.083261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.088888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.089133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.089153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.094372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.094570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.094590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.098827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 
[2024-12-13 09:36:53.099018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.099037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.103334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.103530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.103549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.107709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.107903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.107922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.112031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.112273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.112293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.117355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.117595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.117616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.123495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.123756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.878 [2024-12-13 09:36:53.130069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.878 [2024-12-13 09:36:53.130373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.878 [2024-12-13 09:36:53.130394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.136023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) 
with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.136255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.136274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.140269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.140470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.140493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.144503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.144700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.144722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.149354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.149557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.149576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.153932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.154166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.154188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.159154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.159352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.159371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.163652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.163848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.163870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.167907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.168102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.168121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.172122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.172318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.172337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.176300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.176502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.176522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.180538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.180740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.180759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.184699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.184893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.184912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.189033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.189227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.189247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.193457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.193651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.193670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.197646] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.197840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.197858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.201817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.202010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.202029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.206016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.206206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.206226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.210176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.210366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.210385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.214352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.214570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.214589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.218573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.218771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.218791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.222723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.222917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.222936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:25:40.879 [2024-12-13 09:36:53.226871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.227063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.227081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.230990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.231184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.231202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.235168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.235365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.235396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:40.879 [2024-12-13 09:36:53.239561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:40.879 [2024-12-13 09:36:53.239758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.879 [2024-12-13 09:36:53.239777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:41.140 [2024-12-13 09:36:53.244931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:41.140 [2024-12-13 09:36:53.245230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.140 [2024-12-13 09:36:53.245251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:41.140 [2024-12-13 09:36:53.250744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:41.140 [2024-12-13 09:36:53.250950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.140 [2024-12-13 09:36:53.250969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:41.140 6529.00 IOPS, 816.12 MiB/s [2024-12-13T08:36:53.506Z] [2024-12-13 09:36:53.256610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb398f0) with pdu=0x200016efef90 00:25:41.140 [2024-12-13 09:36:53.256808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.140 [2024-12-13 09:36:53.256831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:41.140 00:25:41.140 Latency(us) 00:25:41.140 [2024-12-13T08:36:53.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:41.140 nvme0n1 : 2.00 6526.24 815.78 0.00 0.00 2447.48 1763.23 13981.01 00:25:41.140 [2024-12-13T08:36:53.506Z] =================================================================================================================== 00:25:41.140 [2024-12-13T08:36:53.506Z] Total : 6526.24 815.78 0.00 0.00 2447.48 1763.23 13981.01 00:25:41.140 { 00:25:41.140 "results": [ 00:25:41.140 { 00:25:41.140 "job": "nvme0n1", 00:25:41.140 "core_mask": "0x2", 00:25:41.140 "workload": "randwrite", 00:25:41.140 "status": "finished", 00:25:41.140 "queue_depth": 16, 00:25:41.140 "io_size": 131072, 00:25:41.140 "runtime": 2.00391, 00:25:41.140 "iops": 6526.241198457016, 00:25:41.140 "mibps": 815.780149807127, 00:25:41.140 "io_failed": 0, 00:25:41.140 "io_timeout": 0, 00:25:41.140 "avg_latency_us": 2447.483116538862, 00:25:41.140 "min_latency_us": 1763.230476190476, 00:25:41.140 "max_latency_us": 13981.013333333334 00:25:41.140 } 00:25:41.140 ], 00:25:41.140 "core_count": 1 00:25:41.140 } 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:41.140 | .driver_specific 00:25:41.140 | .nvme_error 00:25:41.140 | .status_code 00:25:41.140 | .command_transient_transport_error' 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 422 > 0 )) 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3470024 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3470024 ']' 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3470024 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:41.140 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.141 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470024 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470024' 00:25:41.400 killing process with pid 3470024 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3470024 00:25:41.400 Received shutdown signal, test time was 
about 2.000000 seconds 00:25:41.400 00:25:41.400 Latency(us) 00:25:41.400 [2024-12-13T08:36:53.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.400 [2024-12-13T08:36:53.766Z] =================================================================================================================== 00:25:41.400 [2024-12-13T08:36:53.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3470024 00:25:41.400 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3468401 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3468401 ']' 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3468401 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468401 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468401' 00:25:41.401 killing process with pid 3468401 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3468401 00:25:41.401 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3468401 00:25:41.659 00:25:41.659 real 0m13.549s 00:25:41.659 user 0m25.863s 00:25:41.659 sys 0m4.435s 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.659 ************************************ 00:25:41.659 END TEST nvmf_digest_error 00:25:41.659 ************************************ 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:41.659 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.660 rmmod nvme_tcp 00:25:41.660 rmmod nvme_fabrics 00:25:41.660 rmmod nvme_keyring 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:41.660 09:36:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3468401 ']' 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3468401 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3468401 ']' 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3468401 00:25:41.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3468401) - No such process 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3468401 is not found' 00:25:41.660 Process with pid 3468401 is not found 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.660 09:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.194 00:25:44.194 real 0m35.191s 00:25:44.194 user 0m53.971s 00:25:44.194 sys 0m13.063s 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:44.194 ************************************ 00:25:44.194 END TEST nvmf_digest 00:25:44.194 ************************************ 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.194 ************************************ 00:25:44.194 START TEST nvmf_bdevperf 00:25:44.194 ************************************ 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:44.194 * Looking for test storage... 00:25:44.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.194 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:44.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.195 --rc genhtml_branch_coverage=1 00:25:44.195 --rc genhtml_function_coverage=1 00:25:44.195 --rc genhtml_legend=1 00:25:44.195 --rc geninfo_all_blocks=1 00:25:44.195 --rc geninfo_unexecuted_blocks=1 00:25:44.195 00:25:44.195 ' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:44.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.195 --rc genhtml_branch_coverage=1 00:25:44.195 --rc genhtml_function_coverage=1 00:25:44.195 --rc genhtml_legend=1 00:25:44.195 --rc geninfo_all_blocks=1 00:25:44.195 --rc geninfo_unexecuted_blocks=1 00:25:44.195 00:25:44.195 ' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:44.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.195 --rc genhtml_branch_coverage=1 00:25:44.195 --rc genhtml_function_coverage=1 00:25:44.195 --rc genhtml_legend=1 00:25:44.195 --rc geninfo_all_blocks=1 00:25:44.195 --rc geninfo_unexecuted_blocks=1 00:25:44.195 00:25:44.195 ' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:44.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.195 --rc genhtml_branch_coverage=1 00:25:44.195 --rc genhtml_function_coverage=1 00:25:44.195 --rc genhtml_legend=1 00:25:44.195 --rc geninfo_all_blocks=1 00:25:44.195 --rc geninfo_unexecuted_blocks=1 00:25:44.195 00:25:44.195 ' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.195 09:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.554 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:49.555 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:49.555 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:49.555 Found net devices under 0000:af:00.0: cvl_0_0 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:49.555 Found net devices under 0000:af:00.1: cvl_0_1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:25:49.555 00:25:49.555 --- 10.0.0.2 ping statistics --- 00:25:49.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.555 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:49.555 00:25:49.555 --- 10.0.0.1 ping statistics --- 00:25:49.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.555 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3474030 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3474030 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3474030 ']' 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.555 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.555 [2024-12-13 09:37:01.713723] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:49.555 [2024-12-13 09:37:01.713786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.556 [2024-12-13 09:37:01.779104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.556 [2024-12-13 09:37:01.818003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.556 [2024-12-13 09:37:01.818042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.556 [2024-12-13 09:37:01.818049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.556 [2024-12-13 09:37:01.818056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.556 [2024-12-13 09:37:01.818061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.556 [2024-12-13 09:37:01.819289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.556 [2024-12-13 09:37:01.819357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.556 [2024-12-13 09:37:01.819358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 [2024-12-13 09:37:01.963935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.815 09:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 Malloc0 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.815 [2024-12-13 09:37:02.031551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:49.815 { 00:25:49.815 "params": { 00:25:49.815 "name": "Nvme$subsystem", 00:25:49.815 "trtype": "$TEST_TRANSPORT", 00:25:49.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.815 "adrfam": "ipv4", 00:25:49.815 "trsvcid": "$NVMF_PORT", 00:25:49.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.815 "hdgst": ${hdgst:-false}, 00:25:49.815 "ddgst": ${ddgst:-false} 00:25:49.815 }, 00:25:49.815 "method": "bdev_nvme_attach_controller" 00:25:49.815 } 00:25:49.815 EOF 00:25:49.815 )") 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:49.815 09:37:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:49.815 "params": { 00:25:49.815 "name": "Nvme1", 00:25:49.815 "trtype": "tcp", 00:25:49.815 "traddr": "10.0.0.2", 00:25:49.815 "adrfam": "ipv4", 00:25:49.815 "trsvcid": "4420", 00:25:49.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.815 "hdgst": false, 00:25:49.815 "ddgst": false 00:25:49.815 }, 00:25:49.815 "method": "bdev_nvme_attach_controller" 00:25:49.815 }' 00:25:49.815 [2024-12-13 09:37:02.082772] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:25:49.816 [2024-12-13 09:37:02.082816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474189 ] 00:25:49.816 [2024-12-13 09:37:02.146887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.074 [2024-12-13 09:37:02.188706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.074 Running I/O for 1 seconds... 00:25:51.010 11363.00 IOPS, 44.39 MiB/s 00:25:51.010 Latency(us) 00:25:51.010 [2024-12-13T08:37:03.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.010 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:51.010 Verification LBA range: start 0x0 length 0x4000 00:25:51.010 Nvme1n1 : 1.01 11413.00 44.58 0.00 0.00 11171.91 2293.76 13107.20 00:25:51.010 [2024-12-13T08:37:03.376Z] =================================================================================================================== 00:25:51.010 [2024-12-13T08:37:03.376Z] Total : 11413.00 44.58 0.00 0.00 11171.91 2293.76 13107.20 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3474417 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:51.269 { 00:25:51.269 "params": { 00:25:51.269 "name": "Nvme$subsystem", 00:25:51.269 "trtype": "$TEST_TRANSPORT", 00:25:51.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.269 "adrfam": "ipv4", 00:25:51.269 "trsvcid": "$NVMF_PORT", 00:25:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.269 "hdgst": ${hdgst:-false}, 00:25:51.269 "ddgst": ${ddgst:-false} 00:25:51.269 }, 00:25:51.269 "method": "bdev_nvme_attach_controller" 00:25:51.269 } 00:25:51.269 EOF 00:25:51.269 )") 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:51.269 09:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:51.269 "params": { 00:25:51.269 "name": "Nvme1", 00:25:51.269 "trtype": "tcp", 00:25:51.269 "traddr": "10.0.0.2", 00:25:51.269 "adrfam": "ipv4", 00:25:51.269 "trsvcid": "4420", 00:25:51.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.269 "hdgst": false, 00:25:51.269 "ddgst": false 00:25:51.269 }, 00:25:51.269 "method": "bdev_nvme_attach_controller" 00:25:51.269 }' 00:25:51.269 [2024-12-13 09:37:03.566209] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:51.269 [2024-12-13 09:37:03.566257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474417 ] 00:25:51.269 [2024-12-13 09:37:03.627949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.528 [2024-12-13 09:37:03.666116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.528 Running I/O for 15 seconds... 00:25:53.834 11238.00 IOPS, 43.90 MiB/s [2024-12-13T08:37:06.771Z] 11367.00 IOPS, 44.40 MiB/s [2024-12-13T08:37:06.771Z] 09:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3474030 00:25:54.405 09:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:54.405 [2024-12-13 09:37:06.542027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 09:37:06.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 09:37:06.542090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 09:37:06.542114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 09:37:06.542133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 09:37:06.542149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.405 [2024-12-13 
09:37:06.542168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.405 [2024-12-13 09:37:06.542702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.405 [2024-12-13 09:37:06.542711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.542728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.542746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.542764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.542779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:54.406 [2024-12-13 09:37:06.542854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.542981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.542993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.543001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.543019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.543039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 
09:37:06.543050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.543060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.406 [2024-12-13 09:37:06.543080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:61 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.406 [2024-12-13 09:37:06.543428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.406 [2024-12-13 09:37:06.543436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111592 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.407 [2024-12-13 09:37:06.543593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:54.407 [2024-12-13 09:37:06.543720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 
09:37:06.543866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.543993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.543999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.544007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.544013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.544021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.544028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.407 [2024-12-13 09:37:06.544036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.407 [2024-12-13 09:37:06.544042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.408 [2024-12-13 09:37:06.544158] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.544165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af3510 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.544175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.408 [2024-12-13 09:37:06.544181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.408 [2024-12-13 09:37:06.544187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111072 len:8 PRP1 0x0 PRP2 0x0 00:25:54.408 [2024-12-13 09:37:06.544193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.408 [2024-12-13 09:37:06.547018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.547071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.547614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.547630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.547638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.547811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.547985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.547993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.548001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.548009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
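The failures above follow directly from the `kill -9 3474030` issued by host/bdevperf.sh a few seconds earlier: with nothing left serving 10.0.0.2:4420, every outstanding I/O on qpair 1 is completed as ABORTED - SQ DELETION, and each reset attempt then fails at connect() with errno = 111, which on Linux is ECONNREFUSED. A quick, SPDK-independent check of that errno value (illustrative only, not part of the test):

# What "errno = 111" in the connect() failures above means on Linux:
import errno, os

print(errno.ECONNREFUSED)               # 111
print(errno.errorcode[111])             # 'ECONNREFUSED'
print(os.strerror(errno.ECONNREFUSED))  # 'Connection refused'
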
00:25:54.408 [2024-12-13 09:37:06.560141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.560577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.560594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.560602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.560773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.560932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.560940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.560947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.560953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.408 [2024-12-13 09:37:06.573022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.573418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.573435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.573442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.573616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.573786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.573794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.573804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.573811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.408 [2024-12-13 09:37:06.585806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.586287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.586335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.586359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.586874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.587043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.587051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.587058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.587065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.408 [2024-12-13 09:37:06.598608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.599013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.599029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.599036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.599223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.599396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.599404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.599411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.599417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.408 [2024-12-13 09:37:06.611376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.611743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.611794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.611817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.612399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.612933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.612942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.612948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.612955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.408 [2024-12-13 09:37:06.624198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.624618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.624634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.624641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.408 [2024-12-13 09:37:06.624809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.408 [2024-12-13 09:37:06.624981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.408 [2024-12-13 09:37:06.624989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.408 [2024-12-13 09:37:06.624995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.408 [2024-12-13 09:37:06.625000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.408 [2024-12-13 09:37:06.637091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.408 [2024-12-13 09:37:06.637513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-12-13 09:37:06.637530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.408 [2024-12-13 09:37:06.637537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.637711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.637870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.637877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.637883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.637889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.409 [2024-12-13 09:37:06.649885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.650287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.650331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.650354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.650876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.651045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.651053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.651059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.651065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
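The same cycle repeats above and below at a near-constant cadence: "resetting controller", connect() refused with errno 111, "Ctrlr is in error state", "controller reinitialization failed", "Resetting controller failed", with attempts landing roughly every 12-13 ms (09:37:06.547, .560, .573, .586, .599, ...). As a rough sketch of that observable behaviour only (this is not bdev_nvme's actual reset/reconnect state machine, and the retry interval is simply what this log happens to show), the loop looks like:

# Rough illustration of the retry pattern in this log (not SPDK code): try to
# reconnect, get ECONNREFUSED while the target is down, report it, wait, retry.
import socket
import time

TARGET = ("10.0.0.2", 4420)   # address and port taken from the log above
RETRY_DELAY_S = 0.013         # ~13 ms, the spacing this particular log shows
MAX_ATTEMPTS = 10             # the real test keeps retrying until the target returns

def reconnect_once(addr):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        s.connect(addr)       # raises ConnectionRefusedError while nothing listens

for attempt in range(1, MAX_ATTEMPTS + 1):
    try:
        reconnect_once(TARGET)
        print(f"attempt {attempt}: reconnected")
        break
    except ConnectionRefusedError:
        print(f"attempt {attempt}: connect() refused - resetting controller failed")
        time.sleep(RETRY_DELAY_S)
else:
    print("giving up: target never came back")
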
00:25:54.409 [2024-12-13 09:37:06.662655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.663097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.663135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.663168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.663768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.664001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.664009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.664016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.664022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.409 [2024-12-13 09:37:06.675418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.675814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.675830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.675837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.675995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.676154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.676162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.676168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.676174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.409 [2024-12-13 09:37:06.688266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.688605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.688622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.688629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.688796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.688964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.688972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.688978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.688984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.409 [2024-12-13 09:37:06.700999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.701418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.701475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.701499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.702013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.702184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.702192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.702199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.702205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.409 [2024-12-13 09:37:06.713844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.714260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.714305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.714327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.714924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.715455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.715464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.715470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.715476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.409 [2024-12-13 09:37:06.726619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.727014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.727029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.727036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.727195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.727354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.727361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.727367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.409 [2024-12-13 09:37:06.727373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.409 [2024-12-13 09:37:06.739371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.409 [2024-12-13 09:37:06.739788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.409 [2024-12-13 09:37:06.739805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.409 [2024-12-13 09:37:06.739812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.409 [2024-12-13 09:37:06.739979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.409 [2024-12-13 09:37:06.740150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.409 [2024-12-13 09:37:06.740158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.409 [2024-12-13 09:37:06.740167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.410 [2024-12-13 09:37:06.740173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.410 [2024-12-13 09:37:06.752109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.410 [2024-12-13 09:37:06.752496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.410 [2024-12-13 09:37:06.752511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.410 [2024-12-13 09:37:06.752518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.410 [2024-12-13 09:37:06.752677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.410 [2024-12-13 09:37:06.752836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.410 [2024-12-13 09:37:06.752843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.410 [2024-12-13 09:37:06.752849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.410 [2024-12-13 09:37:06.752855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.410 [2024-12-13 09:37:06.765052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.410 [2024-12-13 09:37:06.765440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.410 [2024-12-13 09:37:06.765462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.410 [2024-12-13 09:37:06.765469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.410 [2024-12-13 09:37:06.765641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.410 [2024-12-13 09:37:06.765815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.410 [2024-12-13 09:37:06.765823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.410 [2024-12-13 09:37:06.765829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.410 [2024-12-13 09:37:06.765835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.669 [2024-12-13 09:37:06.777945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.669 [2024-12-13 09:37:06.778307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.669 [2024-12-13 09:37:06.778324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.669 [2024-12-13 09:37:06.778331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.669 [2024-12-13 09:37:06.778509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.669 [2024-12-13 09:37:06.778683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.669 [2024-12-13 09:37:06.778691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.669 [2024-12-13 09:37:06.778698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.778704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.670 [2024-12-13 09:37:06.790805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.791199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.791221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.791380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.791564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.791572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.791578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.791584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.670 [2024-12-13 09:37:06.803657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.804071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.804087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.804095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.804267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.804441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.804455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.804462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.804468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.670 [2024-12-13 09:37:06.816755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.817097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.817114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.817122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.817295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.817474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.817484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.817492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.817499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.670 [2024-12-13 09:37:06.829873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.830285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.830301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.830312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.830490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.830664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.830672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.830679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.830685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.670 [2024-12-13 09:37:06.842844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.843247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.843263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.843270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.843437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.843610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.843618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.843625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.843631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.670 [2024-12-13 09:37:06.855798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.856229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.856274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.856297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.856793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.856962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.856970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.856976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.856982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.670 [2024-12-13 09:37:06.868661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.869046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.869061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.869068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.869227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.869389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.869396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.869402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.869408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.670 10069.33 IOPS, 39.33 MiB/s [2024-12-13T08:37:07.036Z] [2024-12-13 09:37:06.881397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.881791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.881808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.881814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.881973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.882132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.882139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.882145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.882151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.670 [2024-12-13 09:37:06.894146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.894536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.894552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.894559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.894717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.894876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.670 [2024-12-13 09:37:06.894883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.670 [2024-12-13 09:37:06.894889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.670 [2024-12-13 09:37:06.894895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.670 [2024-12-13 09:37:06.906889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.670 [2024-12-13 09:37:06.907331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.670 [2024-12-13 09:37:06.907347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.670 [2024-12-13 09:37:06.907354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.670 [2024-12-13 09:37:06.907527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.670 [2024-12-13 09:37:06.907695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.907703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.907713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.907720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.671 [2024-12-13 09:37:06.919695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.920081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.920097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.920104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.920263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.920421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.920429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.920435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.920440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.671 [2024-12-13 09:37:06.932437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.932850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.932866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.932873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.933041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.933213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.933221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.933228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.933234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.671 [2024-12-13 09:37:06.945218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.945571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.945588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.945595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.945763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.945930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.945938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.945944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.945951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.671 [2024-12-13 09:37:06.957992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.958316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.958331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.958338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.958512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.958680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.958687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.958694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.958700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.671 [2024-12-13 09:37:06.970771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.971169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.971185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.971192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.971359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.971541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.971550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.971556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.971563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.671 [2024-12-13 09:37:06.983606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.984026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.984042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.984049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.984217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.984384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.984392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.984399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.984404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.671 [2024-12-13 09:37:06.996401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:06.996758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:06.996778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:06.996785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:06.996953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:06.997125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:06.997132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:06.997139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:06.997144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.671 [2024-12-13 09:37:07.009228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:07.009621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:07.009637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:07.009644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:07.009803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:07.009961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:07.009969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:07.009975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:07.009981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.671 [2024-12-13 09:37:07.021970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.671 [2024-12-13 09:37:07.022359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.671 [2024-12-13 09:37:07.022374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.671 [2024-12-13 09:37:07.022381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.671 [2024-12-13 09:37:07.022565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.671 [2024-12-13 09:37:07.022733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.671 [2024-12-13 09:37:07.022741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.671 [2024-12-13 09:37:07.022747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.671 [2024-12-13 09:37:07.022754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.671 [2024-12-13 09:37:07.035065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.035485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.035502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.035509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.035686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.035860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.035868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.035874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.035881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.934 [2024-12-13 09:37:07.047869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.048271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.048314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.048338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.048934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.049307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.049314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.049320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.049326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.934 [2024-12-13 09:37:07.060652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.061065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.061082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.061089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.061262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.061434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.061443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.061455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.061462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.934 [2024-12-13 09:37:07.073768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.074188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.074233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.074256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.074853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.075254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.075262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.075272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.075278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.934 [2024-12-13 09:37:07.086537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.086962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.086979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.086986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.087154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.087322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.087330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.087336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.087342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.934 [2024-12-13 09:37:07.099374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.099811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.099857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.099880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.100385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.100595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.100608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.100618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.100628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.934 [2024-12-13 09:37:07.113034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.113385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.113402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.113410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.113599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.113787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.113796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.113803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.113810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.934 [2024-12-13 09:37:07.125995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.126351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.126368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.126376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.126554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.126735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.126743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.126749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.126755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.934 [2024-12-13 09:37:07.139001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.139337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.139353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.139360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.139538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.139711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.139719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.934 [2024-12-13 09:37:07.139726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.934 [2024-12-13 09:37:07.139732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.934 [2024-12-13 09:37:07.151935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.934 [2024-12-13 09:37:07.152213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.934 [2024-12-13 09:37:07.152228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.934 [2024-12-13 09:37:07.152235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.934 [2024-12-13 09:37:07.152403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.934 [2024-12-13 09:37:07.152577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.934 [2024-12-13 09:37:07.152586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.152592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.152598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.935 [2024-12-13 09:37:07.164961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.165274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.165294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.165301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.165475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.165643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.165651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.165657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.165663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.935 [2024-12-13 09:37:07.177793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.178130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.178146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.178153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.178321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.178493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.178502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.178508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.178514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.935 [2024-12-13 09:37:07.190683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.190943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.190959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.190966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.191133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.191301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.191309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.191315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.191321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.935 [2024-12-13 09:37:07.203645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.203998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.204014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.204021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.204192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.204362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.204370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.204393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.204400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.935 [2024-12-13 09:37:07.216560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.216905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.216955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.216978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.217503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.217673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.217681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.217687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.217693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.935 [2024-12-13 09:37:07.229521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.229870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.229886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.229893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.230061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.230229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.230236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.230243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.230249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.935 [2024-12-13 09:37:07.242319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.242709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.242726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.242733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.242901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.243068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.243076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.243086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.243092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.935 [2024-12-13 09:37:07.255304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.255666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.255684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.255691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.255859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.256028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.256036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.256042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.256048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.935 [2024-12-13 09:37:07.268094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.268367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.268383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.268390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.268562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.268731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.268739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.268746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.935 [2024-12-13 09:37:07.268752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:54.935 [2024-12-13 09:37:07.280936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.935 [2024-12-13 09:37:07.281273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.935 [2024-12-13 09:37:07.281322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.935 [2024-12-13 09:37:07.281345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.935 [2024-12-13 09:37:07.281869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.935 [2024-12-13 09:37:07.282038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.935 [2024-12-13 09:37:07.282046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.935 [2024-12-13 09:37:07.282053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.936 [2024-12-13 09:37:07.282059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:54.936 [2024-12-13 09:37:07.293802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:54.936 [2024-12-13 09:37:07.294144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.936 [2024-12-13 09:37:07.294160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:54.936 [2024-12-13 09:37:07.294168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:54.936 [2024-12-13 09:37:07.294335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:54.936 [2024-12-13 09:37:07.294510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:54.936 [2024-12-13 09:37:07.294519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:54.936 [2024-12-13 09:37:07.294525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:54.936 [2024-12-13 09:37:07.294531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.195 [2024-12-13 09:37:07.306797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.195 [2024-12-13 09:37:07.307236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.195 [2024-12-13 09:37:07.307282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.195 [2024-12-13 09:37:07.307305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.195 [2024-12-13 09:37:07.307783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.195 [2024-12-13 09:37:07.307953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.195 [2024-12-13 09:37:07.307961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.195 [2024-12-13 09:37:07.307967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.195 [2024-12-13 09:37:07.307973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.195 [2024-12-13 09:37:07.319693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.195 [2024-12-13 09:37:07.320059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.195 [2024-12-13 09:37:07.320076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.195 [2024-12-13 09:37:07.320083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.195 [2024-12-13 09:37:07.320250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.195 [2024-12-13 09:37:07.320419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.195 [2024-12-13 09:37:07.320428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.195 [2024-12-13 09:37:07.320434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.195 [2024-12-13 09:37:07.320441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.195 [2024-12-13 09:37:07.332851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.195 [2024-12-13 09:37:07.333140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.195 [2024-12-13 09:37:07.333160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.195 [2024-12-13 09:37:07.333167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.195 [2024-12-13 09:37:07.333340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.195 [2024-12-13 09:37:07.333517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.195 [2024-12-13 09:37:07.333526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.195 [2024-12-13 09:37:07.333533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.195 [2024-12-13 09:37:07.333540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.195 [2024-12-13 09:37:07.345793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.195 [2024-12-13 09:37:07.346148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.195 [2024-12-13 09:37:07.346164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.195 [2024-12-13 09:37:07.346172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.195 [2024-12-13 09:37:07.346339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.195 [2024-12-13 09:37:07.346513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.195 [2024-12-13 09:37:07.346522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.195 [2024-12-13 09:37:07.346528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.195 [2024-12-13 09:37:07.346535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.195 [2024-12-13 09:37:07.358710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.195 [2024-12-13 09:37:07.359080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.195 [2024-12-13 09:37:07.359097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.195 [2024-12-13 09:37:07.359104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.359272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.359440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.359452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.359460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.359466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.196 [2024-12-13 09:37:07.371445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.371878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.371895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.371902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.372073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.372246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.372254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.372260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.372266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.196 [2024-12-13 09:37:07.384309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.384670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.384687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.384694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.384861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.385029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.385037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.385045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.385051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.196 [2024-12-13 09:37:07.397170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.397522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.397538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.397545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.397714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.397881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.397889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.397895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.397902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.196 [2024-12-13 09:37:07.409956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.410379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.410395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.410402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.410574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.410743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.410751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.410761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.410767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.196 [2024-12-13 09:37:07.422733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.423108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.423124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.423131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.423299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.423473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.423482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.423488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.423494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.196 [2024-12-13 09:37:07.435567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.435914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.435931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.435938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.436106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.436274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.436282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.436288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.436294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.196 [2024-12-13 09:37:07.448514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.448913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.448956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.448979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.449406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.449581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.449590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.449596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.449602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.196 [2024-12-13 09:37:07.461622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.462005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.462022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.462029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.462195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.462363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.462371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.462377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.462383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.196 [2024-12-13 09:37:07.474441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.474792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.474808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.474815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.474983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.196 [2024-12-13 09:37:07.475151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.196 [2024-12-13 09:37:07.475159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.196 [2024-12-13 09:37:07.475165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.196 [2024-12-13 09:37:07.475171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.196 [2024-12-13 09:37:07.487331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.196 [2024-12-13 09:37:07.487602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.196 [2024-12-13 09:37:07.487618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.196 [2024-12-13 09:37:07.487625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.196 [2024-12-13 09:37:07.487793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.487961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.487969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.487975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.487981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.197 [2024-12-13 09:37:07.500100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.197 [2024-12-13 09:37:07.500472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.197 [2024-12-13 09:37:07.500508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.197 [2024-12-13 09:37:07.500516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.197 [2024-12-13 09:37:07.500689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.500863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.500871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.500878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.500884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.197 [2024-12-13 09:37:07.512893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.197 [2024-12-13 09:37:07.513225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.197 [2024-12-13 09:37:07.513241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.197 [2024-12-13 09:37:07.513249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.197 [2024-12-13 09:37:07.513417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.513590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.513599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.513605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.513611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.197 [2024-12-13 09:37:07.525696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.197 [2024-12-13 09:37:07.526056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.197 [2024-12-13 09:37:07.526072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.197 [2024-12-13 09:37:07.526080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.197 [2024-12-13 09:37:07.526248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.526416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.526424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.526431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.526437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.197 [2024-12-13 09:37:07.538616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.197 [2024-12-13 09:37:07.539018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.197 [2024-12-13 09:37:07.539034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.197 [2024-12-13 09:37:07.539040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.197 [2024-12-13 09:37:07.539211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.539383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.539391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.539397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.539403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.197 [2024-12-13 09:37:07.551688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.197 [2024-12-13 09:37:07.552109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.197 [2024-12-13 09:37:07.552125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.197 [2024-12-13 09:37:07.552132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.197 [2024-12-13 09:37:07.552304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.197 [2024-12-13 09:37:07.552484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.197 [2024-12-13 09:37:07.552493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.197 [2024-12-13 09:37:07.552499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.197 [2024-12-13 09:37:07.552505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.456 [2024-12-13 09:37:07.564668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.456 [2024-12-13 09:37:07.565011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.456 [2024-12-13 09:37:07.565028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.456 [2024-12-13 09:37:07.565035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.456 [2024-12-13 09:37:07.565208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.456 [2024-12-13 09:37:07.565382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.456 [2024-12-13 09:37:07.565390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.565397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.565403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.457 [2024-12-13 09:37:07.577524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.577917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.577962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.577986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.578541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.578716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.578725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.578737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.578744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.457 [2024-12-13 09:37:07.590593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.590938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.590955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.590962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.591135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.591308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.591316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.591323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.591329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.457 [2024-12-13 09:37:07.603631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.604058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.604103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.604126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.604595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.604764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.604772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.604778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.604784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.457 [2024-12-13 09:37:07.616426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.616778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.616795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.616801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.616961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.617119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.617127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.617133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.617139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.457 [2024-12-13 09:37:07.629326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.629704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.629721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.629728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.629896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.630064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.630072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.630078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.630083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.457 [2024-12-13 09:37:07.642231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.642646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.642663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.642670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.642837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.643005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.643013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.643019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.643025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.457 [2024-12-13 09:37:07.655082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.655500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.655517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.655524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.655692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.655860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.655868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.655874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.655890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.457 [2024-12-13 09:37:07.667884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.668307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.668328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.668335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.668510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.668679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.668687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.668693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.668699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.457 [2024-12-13 09:37:07.680689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.681083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.681099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.681106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.681265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.681423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.681431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.681437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.457 [2024-12-13 09:37:07.681443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.457 [2024-12-13 09:37:07.693454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.457 [2024-12-13 09:37:07.693869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.457 [2024-12-13 09:37:07.693884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.457 [2024-12-13 09:37:07.693890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.457 [2024-12-13 09:37:07.694049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.457 [2024-12-13 09:37:07.694208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.457 [2024-12-13 09:37:07.694215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.457 [2024-12-13 09:37:07.694221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.694227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.458 [2024-12-13 09:37:07.706313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.706751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.706798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.706821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.707413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.707907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.707920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.707930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.707940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.458 [2024-12-13 09:37:07.719972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.720391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.720437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.720474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.720932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.721116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.721124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.721131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.721137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.458 [2024-12-13 09:37:07.732811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.733210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.733226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.733233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.733391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.733578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.733586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.733592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.733598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.458 [2024-12-13 09:37:07.745583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.745999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.746015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.746022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.746190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.746357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.746365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.746375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.746381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.458 [2024-12-13 09:37:07.758458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.758924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.758968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.758990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.759586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.760004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.760012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.760018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.760024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.458 [2024-12-13 09:37:07.771234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.771659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.771675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.771682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.771851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.772019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.772027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.772033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.772039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.458 [2024-12-13 09:37:07.784034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.784435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.784454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.784461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.784643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.784810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.784818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.784824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.784830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.458 [2024-12-13 09:37:07.796859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.797250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.797266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.797272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.797431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.797619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.797627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.797633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.797639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.458 [2024-12-13 09:37:07.809657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.458 [2024-12-13 09:37:07.810094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.458 [2024-12-13 09:37:07.810138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.458 [2024-12-13 09:37:07.810160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.458 [2024-12-13 09:37:07.810757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.458 [2024-12-13 09:37:07.810944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.458 [2024-12-13 09:37:07.810952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.458 [2024-12-13 09:37:07.810959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.458 [2024-12-13 09:37:07.810964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.718 [2024-12-13 09:37:07.822697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.718 [2024-12-13 09:37:07.823123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.718 [2024-12-13 09:37:07.823167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.718 [2024-12-13 09:37:07.823189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.718 [2024-12-13 09:37:07.823787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.718 [2024-12-13 09:37:07.824226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.718 [2024-12-13 09:37:07.824234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.718 [2024-12-13 09:37:07.824241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.718 [2024-12-13 09:37:07.824247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.718 [2024-12-13 09:37:07.835466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.718 [2024-12-13 09:37:07.835874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.718 [2024-12-13 09:37:07.835894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.718 [2024-12-13 09:37:07.835901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.718 [2024-12-13 09:37:07.836074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.718 [2024-12-13 09:37:07.836247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.718 [2024-12-13 09:37:07.836256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.718 [2024-12-13 09:37:07.836263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.718 [2024-12-13 09:37:07.836269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.718 [2024-12-13 09:37:07.848478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.718 [2024-12-13 09:37:07.848887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.718 [2024-12-13 09:37:07.848903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.718 [2024-12-13 09:37:07.848910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.718 [2024-12-13 09:37:07.849082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.718 [2024-12-13 09:37:07.849256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.718 [2024-12-13 09:37:07.849264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.718 [2024-12-13 09:37:07.849270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.718 [2024-12-13 09:37:07.849277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.718 [2024-12-13 09:37:07.861356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.718 [2024-12-13 09:37:07.861780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.718 [2024-12-13 09:37:07.861797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.718 [2024-12-13 09:37:07.861804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.718 [2024-12-13 09:37:07.861971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.718 [2024-12-13 09:37:07.862139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.862147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.862153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.862159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.719 [2024-12-13 09:37:07.874172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.874566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.874582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.874589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.874760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.874923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.874931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.874937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.874942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.719 7552.00 IOPS, 29.50 MiB/s [2024-12-13T08:37:08.085Z] [2024-12-13 09:37:07.886898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.887315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.887332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.887339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.887514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.887683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.887691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.887697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.887703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.719 [2024-12-13 09:37:07.899690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.900120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.900164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.900187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.900783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.901279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.901287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.901293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.901299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.719 [2024-12-13 09:37:07.912501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.912903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.912919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.912926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.913084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.913244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.913252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.913261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.913267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.719 [2024-12-13 09:37:07.925262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.925682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.925698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.925705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.925873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.926041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.926049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.926055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.926061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.719 [2024-12-13 09:37:07.938058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.938472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.938489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.938496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.938664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.938832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.938840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.938846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.938852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.719 [2024-12-13 09:37:07.950919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.951313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.951329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.951335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.951518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.951686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.951694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.951700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.951706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.719 [2024-12-13 09:37:07.963809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.964211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.964227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.964234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.964402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.964595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.964604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.964610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.964616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.719 [2024-12-13 09:37:07.976738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.719 [2024-12-13 09:37:07.977174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.719 [2024-12-13 09:37:07.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.719 [2024-12-13 09:37:07.977242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.719 [2024-12-13 09:37:07.977838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.719 [2024-12-13 09:37:07.978410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.719 [2024-12-13 09:37:07.978418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.719 [2024-12-13 09:37:07.978424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.719 [2024-12-13 09:37:07.978430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.720 [2024-12-13 09:37:07.989667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:07.990083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:07.990099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:07.990106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:07.990273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:07.990441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:07.990454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:07.990461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:07.990466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.720 [2024-12-13 09:37:08.002611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.003036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.003056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.003063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.003230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.003398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.003406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.003412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.003418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.720 [2024-12-13 09:37:08.015407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.015787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.015803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.015810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.015978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.016145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.016153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.016160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.016166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.720 [2024-12-13 09:37:08.028269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.028684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.028730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.028753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.029276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.029445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.029459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.029465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.029472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.720 [2024-12-13 09:37:08.041099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.041510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.041554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.041577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.042166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.042411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.042419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.042425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.042431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.720 [2024-12-13 09:37:08.053916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.054313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.054358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.054380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.054979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.055590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.055598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.055605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.055611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.720 [2024-12-13 09:37:08.066713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.067138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.067182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.067205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.067690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.067859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.067867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.067874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.067880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.720 [2024-12-13 09:37:08.079459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.720 [2024-12-13 09:37:08.079879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.720 [2024-12-13 09:37:08.079895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.720 [2024-12-13 09:37:08.079902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.720 [2024-12-13 09:37:08.080075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.720 [2024-12-13 09:37:08.080248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.720 [2024-12-13 09:37:08.080256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.720 [2024-12-13 09:37:08.080266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.720 [2024-12-13 09:37:08.080272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.980 [2024-12-13 09:37:08.092439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.092858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.092875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.092882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.093055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.093229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.093237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.093243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.093249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.981 [2024-12-13 09:37:08.105446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.105858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.105874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.105881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.106053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.106228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.106236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.106243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.106249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.981 [2024-12-13 09:37:08.118386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.118832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.118850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.118857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.119030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.119203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.119211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.119218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.119223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.981 [2024-12-13 09:37:08.131294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.131748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.131794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.131817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.132398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.132932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.132941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.132948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.132953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.981 [2024-12-13 09:37:08.144137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.144575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.144591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.144599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.144773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.144933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.144941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.144946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.144952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.981 [2024-12-13 09:37:08.157040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.157461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.157477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.157485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.157652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.157820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.157828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.157834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.157840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.981 [2024-12-13 09:37:08.169915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.170336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.170388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.170412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.170915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.171084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.171092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.171098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.171104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.981 [2024-12-13 09:37:08.182758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.183174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.183189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.183196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.183364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.183538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.183546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.183552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.183558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.981 [2024-12-13 09:37:08.195590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.196008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.196051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.196074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.196670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.197139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.197147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.981 [2024-12-13 09:37:08.197154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.981 [2024-12-13 09:37:08.197160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.981 [2024-12-13 09:37:08.208319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.981 [2024-12-13 09:37:08.208728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.981 [2024-12-13 09:37:08.208745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.981 [2024-12-13 09:37:08.208752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.981 [2024-12-13 09:37:08.208925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.981 [2024-12-13 09:37:08.209093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.981 [2024-12-13 09:37:08.209101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.209107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.209113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.982 [2024-12-13 09:37:08.221183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.221593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.221610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.221617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.221785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.221953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.221960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.221966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.221972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.982 [2024-12-13 09:37:08.234141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.234582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.234627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.234649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.235102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.235261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.235269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.235275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.235280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.982 [2024-12-13 09:37:08.247110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.247500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.247552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.247575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.248122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.248291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.248299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.248308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.248315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.982 [2024-12-13 09:37:08.259946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.260312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.260327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.260334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.260508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.260677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.260685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.260691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.260697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.982 [2024-12-13 09:37:08.272679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.273111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.273156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.273179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.273777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.274264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.274272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.274278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.274284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.982 [2024-12-13 09:37:08.285522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.285935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.285951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.285957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.286125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.286293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.286301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.286308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.286313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.982 [2024-12-13 09:37:08.298334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.298755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.298772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.298779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.298947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.299114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.299122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.299128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.299134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.982 [2024-12-13 09:37:08.311074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.311489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.311505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.311512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.311680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.311850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.311858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.311864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.311870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:55.982 [2024-12-13 09:37:08.323931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.324328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.324344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.324351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.324534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.324703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.982 [2024-12-13 09:37:08.324710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.982 [2024-12-13 09:37:08.324716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.982 [2024-12-13 09:37:08.324723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:55.982 [2024-12-13 09:37:08.336697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:55.982 [2024-12-13 09:37:08.337109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.982 [2024-12-13 09:37:08.337129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:55.982 [2024-12-13 09:37:08.337136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:55.982 [2024-12-13 09:37:08.337303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:55.982 [2024-12-13 09:37:08.337478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:55.983 [2024-12-13 09:37:08.337487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:55.983 [2024-12-13 09:37:08.337493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:55.983 [2024-12-13 09:37:08.337499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.243 [2024-12-13 09:37:08.349565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.349994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.350010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.350018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.350185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.350353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.350362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.350370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.350377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.243 [2024-12-13 09:37:08.362699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.363114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.363158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.363180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.363776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.364364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.364397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.364403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.364410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.243 [2024-12-13 09:37:08.375736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.376143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.376159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.376166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.376337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.376527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.376536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.376542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.376549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.243 [2024-12-13 09:37:08.388569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.388986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.389002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.389009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.389177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.389348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.389356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.389362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.389368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.243 [2024-12-13 09:37:08.401414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.401809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.401845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.401870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.402420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.402613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.402622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.402628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.402634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.243 [2024-12-13 09:37:08.414191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.414618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.414664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.414687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.415268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.415627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.415636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.415645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.415651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.243 [2024-12-13 09:37:08.427005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.427410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.427467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.427492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.243 [2024-12-13 09:37:08.428074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.243 [2024-12-13 09:37:08.428577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.243 [2024-12-13 09:37:08.428591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.243 [2024-12-13 09:37:08.428597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.243 [2024-12-13 09:37:08.428603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.243 [2024-12-13 09:37:08.439839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.243 [2024-12-13 09:37:08.440207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.243 [2024-12-13 09:37:08.440223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.243 [2024-12-13 09:37:08.440229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.440396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.440570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.440578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.440585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.440590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.244 [2024-12-13 09:37:08.452660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.453052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.453067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.453074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.453232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.453391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.453398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.453404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.453410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.244 [2024-12-13 09:37:08.465532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.465947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.465963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.465970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.466137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.466306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.466314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.466321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.466327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.244 [2024-12-13 09:37:08.478340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.478730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.478746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.478754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.478921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.479089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.479097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.479103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.479108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.244 [2024-12-13 09:37:08.491139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.491563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.491608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.491631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.492213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.492517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.492526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.492532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.492538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.244 [2024-12-13 09:37:08.504140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.504567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.504588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.504595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.504765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.504924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.504931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.504937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.504943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.244 [2024-12-13 09:37:08.516969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.517390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.517406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.517413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.517587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.517755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.517763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.517770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.517776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.244 [2024-12-13 09:37:08.529810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.530219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.530235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.530242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.530409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.530584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.530593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.530599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.530605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.244 [2024-12-13 09:37:08.542872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.543306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.543322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.543329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.543510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.543683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.244 [2024-12-13 09:37:08.543691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.244 [2024-12-13 09:37:08.543697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.244 [2024-12-13 09:37:08.543704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.244 [2024-12-13 09:37:08.555719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.244 [2024-12-13 09:37:08.556091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-12-13 09:37:08.556107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.244 [2024-12-13 09:37:08.556114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.244 [2024-12-13 09:37:08.556281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.244 [2024-12-13 09:37:08.556454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.245 [2024-12-13 09:37:08.556463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.245 [2024-12-13 09:37:08.556469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.245 [2024-12-13 09:37:08.556491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.245 [2024-12-13 09:37:08.568704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.245 [2024-12-13 09:37:08.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.245 [2024-12-13 09:37:08.569181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.245 [2024-12-13 09:37:08.569188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.245 [2024-12-13 09:37:08.569355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.245 [2024-12-13 09:37:08.569532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.245 [2024-12-13 09:37:08.569540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.245 [2024-12-13 09:37:08.569547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.245 [2024-12-13 09:37:08.569553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.245 [2024-12-13 09:37:08.581517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.245 [2024-12-13 09:37:08.581820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.245 [2024-12-13 09:37:08.581837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.245 [2024-12-13 09:37:08.581844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.245 [2024-12-13 09:37:08.582125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.245 [2024-12-13 09:37:08.582335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.245 [2024-12-13 09:37:08.582345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.245 [2024-12-13 09:37:08.582356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.245 [2024-12-13 09:37:08.582363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.245 [2024-12-13 09:37:08.594462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.245 [2024-12-13 09:37:08.594767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.245 [2024-12-13 09:37:08.594783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.245 [2024-12-13 09:37:08.594790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.245 [2024-12-13 09:37:08.594959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.245 [2024-12-13 09:37:08.595130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.245 [2024-12-13 09:37:08.595138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.245 [2024-12-13 09:37:08.595145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.245 [2024-12-13 09:37:08.595151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.245 [2024-12-13 09:37:08.607512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.245 [2024-12-13 09:37:08.607903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.245 [2024-12-13 09:37:08.607921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.245 [2024-12-13 09:37:08.607929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.245 [2024-12-13 09:37:08.608101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.245 [2024-12-13 09:37:08.608275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.245 [2024-12-13 09:37:08.608283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.245 [2024-12-13 09:37:08.608290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.245 [2024-12-13 09:37:08.608296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.506 [2024-12-13 09:37:08.620576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.620990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.621006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.621043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.621642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.622184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.622192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.622198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.622205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.506 [2024-12-13 09:37:08.633591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.633958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.634004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.634028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.634624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.634874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.634881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.634888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.634894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.506 [2024-12-13 09:37:08.646554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.646893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.646910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.646917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.647089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.647264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.647272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.647278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.647284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.506 [2024-12-13 09:37:08.659837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.660283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.660302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.660310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.660511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.660708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.660717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.660725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.660731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.506 [2024-12-13 09:37:08.673276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.673738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.673761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.673770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.673979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.674189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.674199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.674206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.674214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.506 [2024-12-13 09:37:08.687019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.687471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.687490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.687499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.687708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.687917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.687927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.687934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.687942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.506 [2024-12-13 09:37:08.700701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.701130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.701149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.701157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.701353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.701554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.701564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.701571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.701578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.506 [2024-12-13 09:37:08.714123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.714522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.714541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.714550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.714764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.714960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.714969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.506 [2024-12-13 09:37:08.714976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.506 [2024-12-13 09:37:08.714983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.506 [2024-12-13 09:37:08.727590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.506 [2024-12-13 09:37:08.728023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.506 [2024-12-13 09:37:08.728040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.506 [2024-12-13 09:37:08.728048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.506 [2024-12-13 09:37:08.728244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.506 [2024-12-13 09:37:08.728440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.506 [2024-12-13 09:37:08.728454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.728462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.728469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.507 [2024-12-13 09:37:08.741023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.741413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.741431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.741484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.741694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.741904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.741914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.741922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.741929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.507 [2024-12-13 09:37:08.754562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.754944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.754962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.754971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.755180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.755389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.755399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.755410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.755418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.507 [2024-12-13 09:37:08.768268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.768719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.768738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.768746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.768955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.769165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.769174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.769182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.769190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.507 [2024-12-13 09:37:08.782049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.782491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.782510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.782519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.782729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.782939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.782948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.782956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.782963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.507 [2024-12-13 09:37:08.795816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.796260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.796278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.796287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.796504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.796716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.796726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.796734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.796741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.507 [2024-12-13 09:37:08.809462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.809907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.809926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.809934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.810143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.810353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.810363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.810371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.810378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.507 [2024-12-13 09:37:08.823228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.823656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.823675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.823684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.823892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.824103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.824112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.824120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.824128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.507 [2024-12-13 09:37:08.836977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.837425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.837444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.837458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.837667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.837877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.837887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.837895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.837902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.507 [2024-12-13 09:37:08.850756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.851252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.851310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.851334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.851931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.852532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.852541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.507 [2024-12-13 09:37:08.852549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.507 [2024-12-13 09:37:08.852557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.507 [2024-12-13 09:37:08.864274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.507 [2024-12-13 09:37:08.864641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-13 09:37:08.864659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.507 [2024-12-13 09:37:08.864667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.507 [2024-12-13 09:37:08.864862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.507 [2024-12-13 09:37:08.865058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.507 [2024-12-13 09:37:08.865068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.508 [2024-12-13 09:37:08.865075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.508 [2024-12-13 09:37:08.865081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.767 [2024-12-13 09:37:08.877486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.767 [2024-12-13 09:37:08.877922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-13 09:37:08.877967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.767 [2024-12-13 09:37:08.877990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.767 [2024-12-13 09:37:08.878548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.767 [2024-12-13 09:37:08.878723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.767 [2024-12-13 09:37:08.878731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.767 [2024-12-13 09:37:08.878738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.767 [2024-12-13 09:37:08.878744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.767 6041.60 IOPS, 23.60 MiB/s [2024-12-13T08:37:09.133Z] [2024-12-13 09:37:08.890510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.767 [2024-12-13 09:37:08.890940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-13 09:37:08.890957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.767 [2024-12-13 09:37:08.890964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.767 [2024-12-13 09:37:08.891136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.767 [2024-12-13 09:37:08.891305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.767 [2024-12-13 09:37:08.891313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.767 [2024-12-13 09:37:08.891319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.767 [2024-12-13 09:37:08.891325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.767 [2024-12-13 09:37:08.903425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.767 [2024-12-13 09:37:08.903828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.767 [2024-12-13 09:37:08.903845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.767 [2024-12-13 09:37:08.903851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.767 [2024-12-13 09:37:08.904020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.904191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.904199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.904206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.904211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.768 [2024-12-13 09:37:08.916207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.916581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.916597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.916603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.916762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.916921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.916928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.916934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.916940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.768 [2024-12-13 09:37:08.929067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.929464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.929479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.929486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.929644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.929802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.929813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.929819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.929825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.768 [2024-12-13 09:37:08.941870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.942271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.942314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.942337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.942805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.942974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.942982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.942988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.942994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.768 [2024-12-13 09:37:08.954682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.955074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.955090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.955096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.955255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.955434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.955441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.955452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.955459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.768 [2024-12-13 09:37:08.967501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.967850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.967894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.967916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.968504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.968673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.968681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.968687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.968693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.768 [2024-12-13 09:37:08.980292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.980701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.980718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.980725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.980892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.981059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.981067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.981073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.981079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.768 [2024-12-13 09:37:08.993069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:08.993464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:08.993480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:08.993487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:08.993645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:08.993804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:08.993812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:08.993818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:08.993823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.768 [2024-12-13 09:37:09.005881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:09.006274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:09.006290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:09.006296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:09.006461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:09.006645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:09.006653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:09.006659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:09.006665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.768 [2024-12-13 09:37:09.018649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:09.019042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:09.019062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:09.019069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:09.019237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:09.019405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.768 [2024-12-13 09:37:09.019412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.768 [2024-12-13 09:37:09.019419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.768 [2024-12-13 09:37:09.019425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.768 [2024-12-13 09:37:09.031470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.768 [2024-12-13 09:37:09.031858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.768 [2024-12-13 09:37:09.031874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.768 [2024-12-13 09:37:09.031881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.768 [2024-12-13 09:37:09.032040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.768 [2024-12-13 09:37:09.032198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.032206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.032212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.032217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.769 [2024-12-13 09:37:09.044207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.044601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.044617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.044623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.044782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.044941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.044948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.044954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.044960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.769 [2024-12-13 09:37:09.057038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.057464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.057480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.057486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.057649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.057808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.057815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.057821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.057827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.769 [2024-12-13 09:37:09.069893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.070264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.070280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.070287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.070445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.070634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.070642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.070648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.070654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.769 [2024-12-13 09:37:09.082716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.083127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.083144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.083151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.083319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.083492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.083501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.083507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.083513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.769 [2024-12-13 09:37:09.095541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.095965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.095981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.095988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.096146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.096305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.096316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.096322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.096327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:56.769 [2024-12-13 09:37:09.108323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.108736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.108753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.108760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.108928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.109096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.109104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.109110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.109116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:56.769 [2024-12-13 09:37:09.121105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:56.769 [2024-12-13 09:37:09.121528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.769 [2024-12-13 09:37:09.121545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:56.769 [2024-12-13 09:37:09.121552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:56.769 [2024-12-13 09:37:09.121725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:56.769 [2024-12-13 09:37:09.121899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:56.769 [2024-12-13 09:37:09.121907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:56.769 [2024-12-13 09:37:09.121914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:56.769 [2024-12-13 09:37:09.121920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.029 [2024-12-13 09:37:09.134156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.029 [2024-12-13 09:37:09.134562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.029 [2024-12-13 09:37:09.134578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.029 [2024-12-13 09:37:09.134585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.029 [2024-12-13 09:37:09.134758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.029 [2024-12-13 09:37:09.134932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.029 [2024-12-13 09:37:09.134939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.029 [2024-12-13 09:37:09.134946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.029 [2024-12-13 09:37:09.134952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.029 [2024-12-13 09:37:09.147109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.029 [2024-12-13 09:37:09.147445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.029 [2024-12-13 09:37:09.147467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.029 [2024-12-13 09:37:09.147474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.029 [2024-12-13 09:37:09.147647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.029 [2024-12-13 09:37:09.147826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.029 [2024-12-13 09:37:09.147834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.029 [2024-12-13 09:37:09.147840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.029 [2024-12-13 09:37:09.147846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.029 [2024-12-13 09:37:09.159983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.029 [2024-12-13 09:37:09.160339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.029 [2024-12-13 09:37:09.160356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.029 [2024-12-13 09:37:09.160363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.029 [2024-12-13 09:37:09.160536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.029 [2024-12-13 09:37:09.160705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.029 [2024-12-13 09:37:09.160713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.029 [2024-12-13 09:37:09.160719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.029 [2024-12-13 09:37:09.160726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.029 [2024-12-13 09:37:09.172846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.029 [2024-12-13 09:37:09.173271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.029 [2024-12-13 09:37:09.173313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.029 [2024-12-13 09:37:09.173336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.173933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.174362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.174370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.174376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.174382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.030 [2024-12-13 09:37:09.185719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.186138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.186157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.186164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.186332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.186505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.186513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.186519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.186525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.030 [2024-12-13 09:37:09.198516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.198870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.198884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.198891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.199049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.199209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.199216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.199222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.199228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.030 [2024-12-13 09:37:09.211335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.211747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.211763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.211770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.211938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.212106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.212114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.212120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.212126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.030 [2024-12-13 09:37:09.224177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.224569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.224585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.224592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.224754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.224913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.224920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.224926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.224932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.030 [2024-12-13 09:37:09.237032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.237430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.237446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.237458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.237641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.237809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.237817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.237823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.237829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.030 [2024-12-13 09:37:09.249807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.250206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.250251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.250273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.250872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.251470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.251495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.251526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.251533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.030 [2024-12-13 09:37:09.262657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.262976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.262992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.262999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.263158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.263317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.263327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.263333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.263338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.030 [2024-12-13 09:37:09.275498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.275891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.275907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.275913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.276072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.276231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.276239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.276245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.276250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.030 [2024-12-13 09:37:09.288240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.288656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.288673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.030 [2024-12-13 09:37:09.288680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.030 [2024-12-13 09:37:09.288847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.030 [2024-12-13 09:37:09.289015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.030 [2024-12-13 09:37:09.289023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.030 [2024-12-13 09:37:09.289029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.030 [2024-12-13 09:37:09.289036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.030 [2024-12-13 09:37:09.301039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.030 [2024-12-13 09:37:09.301425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.030 [2024-12-13 09:37:09.301441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.301453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.301635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.301805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.301813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.301819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.301825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.031 [2024-12-13 09:37:09.313841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.314233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.314264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.314287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.314885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.315482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.315509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.315529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.315548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.031 [2024-12-13 09:37:09.326569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.326983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.326999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.327006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.327173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.327341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.327349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.327355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.327361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.031 [2024-12-13 09:37:09.339397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.339792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.339808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.339815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.339974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.340132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.340140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.340146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.340151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.031 [2024-12-13 09:37:09.352186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.352575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.352594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.352601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.352760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.352918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.352926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.352932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.352937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.031 [2024-12-13 09:37:09.365076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.365495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.365539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.365562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.365997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.366156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.366163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.366169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.366175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.031 [2024-12-13 09:37:09.377927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.378340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.378356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.378363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.378554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.378728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.378736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.378743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.378750] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.031 [2024-12-13 09:37:09.391005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.031 [2024-12-13 09:37:09.391413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.031 [2024-12-13 09:37:09.391430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.031 [2024-12-13 09:37:09.391437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.031 [2024-12-13 09:37:09.391617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.031 [2024-12-13 09:37:09.391790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.031 [2024-12-13 09:37:09.391798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.031 [2024-12-13 09:37:09.391805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.031 [2024-12-13 09:37:09.391811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.292 [2024-12-13 09:37:09.404040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.404434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.404491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.404514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.404937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.405110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.405118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.405124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.405130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.292 [2024-12-13 09:37:09.417007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.417412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.417429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.417435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.417637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.417806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.417814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.417820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.417826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.292 [2024-12-13 09:37:09.429755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.430079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.430095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.430102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.430260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.430419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.430426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.430436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.430441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.292 [2024-12-13 09:37:09.442725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.443150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.443195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.443217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.443738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.443912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.443920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.443926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.443932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.292 [2024-12-13 09:37:09.455485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.455898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.455914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.455921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.456089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.456257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.456264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.456271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.456276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.292 [2024-12-13 09:37:09.468304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.468717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.468734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.468741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.468909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.469077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.469085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.469091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.469097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.292 [2024-12-13 09:37:09.481097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.481426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.481442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.481454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.481637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.481805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.481813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.481819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.481825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.292 [2024-12-13 09:37:09.493952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.494338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.494354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.494361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.494543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.494712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.494720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.494726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.494732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.292 [2024-12-13 09:37:09.506805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.507195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.507210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.507217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.507376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.507560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.292 [2024-12-13 09:37:09.507569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.292 [2024-12-13 09:37:09.507575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.292 [2024-12-13 09:37:09.507581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.292 [2024-12-13 09:37:09.519557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.292 [2024-12-13 09:37:09.519944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.292 [2024-12-13 09:37:09.519963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.292 [2024-12-13 09:37:09.519970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.292 [2024-12-13 09:37:09.520130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.292 [2024-12-13 09:37:09.520289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.520296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.520302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.520308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3474030 Killed "${NVMF_APP[@]}" "$@" 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.293 [2024-12-13 09:37:09.532530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.532925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.532941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.532947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.533115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.533283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.533291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.533297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.533303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3475318 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3475318 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3475318 ']' 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.293 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.293 [2024-12-13 09:37:09.545526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.545920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.545937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.545944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.546117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.546290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.546298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.546304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.546310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.293 [2024-12-13 09:37:09.558514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.558923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.558941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.558948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.559121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.559294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.559302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.559309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.559315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.293 [2024-12-13 09:37:09.571528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.571934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.571951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.571959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.572132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.572306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.572315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.572321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.572328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.293 [2024-12-13 09:37:09.584752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.585148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.585166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.585177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.585263] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:25:57.293 [2024-12-13 09:37:09.585302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.293 [2024-12-13 09:37:09.585351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.585531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.585539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.585546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.585552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.293 [2024-12-13 09:37:09.597814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.598228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.598245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.598253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.598426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.598605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.598614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.598621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.598627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.293 [2024-12-13 09:37:09.610724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.611114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.611131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.611138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.611311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.611490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.611499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.611506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.293 [2024-12-13 09:37:09.611513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.293 [2024-12-13 09:37:09.623730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.293 [2024-12-13 09:37:09.624132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.293 [2024-12-13 09:37:09.624149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.293 [2024-12-13 09:37:09.624160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.293 [2024-12-13 09:37:09.624333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.293 [2024-12-13 09:37:09.624512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.293 [2024-12-13 09:37:09.624521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.293 [2024-12-13 09:37:09.624528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.294 [2024-12-13 09:37:09.624534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.294 [2024-12-13 09:37:09.636757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.294 [2024-12-13 09:37:09.637214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.294 [2024-12-13 09:37:09.637230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.294 [2024-12-13 09:37:09.637238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.294 [2024-12-13 09:37:09.637411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.294 [2024-12-13 09:37:09.637590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.294 [2024-12-13 09:37:09.637599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.294 [2024-12-13 09:37:09.637606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.294 [2024-12-13 09:37:09.637612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.294 [2024-12-13 09:37:09.649828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.294 [2024-12-13 09:37:09.650241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.294 [2024-12-13 09:37:09.650257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.294 [2024-12-13 09:37:09.650265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.294 [2024-12-13 09:37:09.650437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.294 [2024-12-13 09:37:09.650616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.294 [2024-12-13 09:37:09.650625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.294 [2024-12-13 09:37:09.650632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.294 [2024-12-13 09:37:09.650638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.294 [2024-12-13 09:37:09.653757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:57.554 [2024-12-13 09:37:09.662843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.663296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.663315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.663324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.554 [2024-12-13 09:37:09.663510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.554 [2024-12-13 09:37:09.663684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.554 [2024-12-13 09:37:09.663693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.554 [2024-12-13 09:37:09.663701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.554 [2024-12-13 09:37:09.663707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.554 [2024-12-13 09:37:09.675884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.676250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.676267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.676275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.554 [2024-12-13 09:37:09.676455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.554 [2024-12-13 09:37:09.676634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.554 [2024-12-13 09:37:09.676642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.554 [2024-12-13 09:37:09.676650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.554 [2024-12-13 09:37:09.676657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.554 [2024-12-13 09:37:09.688877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.689256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.689273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.689281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.554 [2024-12-13 09:37:09.689461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.554 [2024-12-13 09:37:09.689635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.554 [2024-12-13 09:37:09.689643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.554 [2024-12-13 09:37:09.689650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.554 [2024-12-13 09:37:09.689656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.554 [2024-12-13 09:37:09.695196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.554 [2024-12-13 09:37:09.695220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.554 [2024-12-13 09:37:09.695227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.554 [2024-12-13 09:37:09.695233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.554 [2024-12-13 09:37:09.695238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:57.554 [2024-12-13 09:37:09.696426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.554 [2024-12-13 09:37:09.696515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.554 [2024-12-13 09:37:09.696516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.554 [2024-12-13 09:37:09.701878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.702306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.702325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.702333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.554 [2024-12-13 09:37:09.702513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.554 [2024-12-13 09:37:09.702688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.554 [2024-12-13 09:37:09.702697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.554 [2024-12-13 09:37:09.702704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.554 [2024-12-13 09:37:09.702711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.554 [2024-12-13 09:37:09.714926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.715368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.715388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.715396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.554 [2024-12-13 09:37:09.715577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.554 [2024-12-13 09:37:09.715751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.554 [2024-12-13 09:37:09.715760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.554 [2024-12-13 09:37:09.715767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.554 [2024-12-13 09:37:09.715774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.554 [2024-12-13 09:37:09.728002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.554 [2024-12-13 09:37:09.728463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-12-13 09:37:09.728484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.554 [2024-12-13 09:37:09.728492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.728667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.728840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.728849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.728856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.728863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.555 [2024-12-13 09:37:09.741081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.741512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.741533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.741548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.741722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.741897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.741905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.741913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.741920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.555 [2024-12-13 09:37:09.754135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.754563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.754584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.754593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.754768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.754941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.754950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.754957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.754965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.555 [2024-12-13 09:37:09.767169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.767564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.767581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.767589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.767763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.767937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.767945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.767952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.767958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.555 [2024-12-13 09:37:09.780169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.780605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.780622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.780630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.780803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.780981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.780990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.780996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.781002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.555 [2024-12-13 09:37:09.793205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.793640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.793658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.793666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.793840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.794015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.794023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.794030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.794036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.555 [2024-12-13 09:37:09.806262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.806564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.806581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.806588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.806761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.806935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.806943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.806950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.806956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.555 [2024-12-13 09:37:09.819321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.819736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.819754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.819762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.819939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.820113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.820121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.820128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.820134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.555 [2024-12-13 09:37:09.832339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.832747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.832764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.555 [2024-12-13 09:37:09.832771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.555 [2024-12-13 09:37:09.832943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.555 [2024-12-13 09:37:09.832976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.555 [2024-12-13 09:37:09.833120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.555 [2024-12-13 09:37:09.833129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.555 [2024-12-13 09:37:09.833136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.555 [2024-12-13 09:37:09.833142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.555 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.555 [2024-12-13 09:37:09.845354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.555 [2024-12-13 09:37:09.845769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-12-13 09:37:09.845786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.556 [2024-12-13 09:37:09.845793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.556 [2024-12-13 09:37:09.845966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.556 [2024-12-13 09:37:09.846139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.556 [2024-12-13 09:37:09.846147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.556 [2024-12-13 09:37:09.846154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.556 [2024-12-13 09:37:09.846164] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.556 [2024-12-13 09:37:09.858375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.556 [2024-12-13 09:37:09.858744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-12-13 09:37:09.858761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.556 [2024-12-13 09:37:09.858768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.556 [2024-12-13 09:37:09.858940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.556 [2024-12-13 09:37:09.859114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.556 [2024-12-13 09:37:09.859122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.556 [2024-12-13 09:37:09.859129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.556 [2024-12-13 09:37:09.859135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.556 [2024-12-13 09:37:09.871345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.556 [2024-12-13 09:37:09.871712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-12-13 09:37:09.871729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.556 [2024-12-13 09:37:09.871736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.556 [2024-12-13 09:37:09.871910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.556 [2024-12-13 09:37:09.872083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.556 [2024-12-13 09:37:09.872092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.556 [2024-12-13 09:37:09.872098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.556 [2024-12-13 09:37:09.872104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:57.556 Malloc0 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.556 [2024-12-13 09:37:09.885702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.556 5034.67 IOPS, 19.67 MiB/s [2024-12-13T08:37:09.922Z] [2024-12-13 09:37:09.886020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-12-13 09:37:09.886036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afd7e0 with addr=10.0.0.2, port=4420 00:25:57.556 [2024-12-13 09:37:09.886043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd7e0 is same with the state(6) to be set 00:25:57.556 [2024-12-13 09:37:09.886216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afd7e0 (9): Bad file descriptor 00:25:57.556 [2024-12-13 09:37:09.886389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:57.556 [2024-12-13 09:37:09.886397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:57.556 [2024-12-13 09:37:09.886407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:57.556 [2024-12-13 09:37:09.886413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:57.556 [2024-12-13 09:37:09.897879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.556 [2024-12-13 09:37:09.898843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.556 09:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3474417 00:25:57.814 [2024-12-13 09:37:10.014956] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:59.686 5699.14 IOPS, 22.26 MiB/s [2024-12-13T08:37:12.988Z] 6390.00 IOPS, 24.96 MiB/s [2024-12-13T08:37:13.924Z] 6925.00 IOPS, 27.05 MiB/s [2024-12-13T08:37:15.298Z] 7355.20 IOPS, 28.73 MiB/s [2024-12-13T08:37:16.232Z] 7708.36 IOPS, 30.11 MiB/s [2024-12-13T08:37:17.167Z] 8013.42 IOPS, 31.30 MiB/s [2024-12-13T08:37:18.101Z] 8264.15 IOPS, 32.28 MiB/s [2024-12-13T08:37:19.036Z] 8491.93 IOPS, 33.17 MiB/s [2024-12-13T08:37:19.036Z] 8665.67 IOPS, 33.85 MiB/s 00:26:06.670 Latency(us) 00:26:06.670 [2024-12-13T08:37:19.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.670 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:06.670 Verification LBA range: start 0x0 length 0x4000 00:26:06.670 Nvme1n1 : 15.01 8669.57 33.87 11293.25 0.00 6392.25 495.42 23717.79 00:26:06.670 [2024-12-13T08:37:19.036Z] =================================================================================================================== 00:26:06.670 [2024-12-13T08:37:19.036Z] Total : 8669.57 33.87 11293.25 0.00 6392.25 495.42 23717.79 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.929 09:37:19 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.929 rmmod nvme_tcp 00:26:06.929 rmmod nvme_fabrics 00:26:06.929 rmmod nvme_keyring 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3475318 ']' 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3475318 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3475318 ']' 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3475318 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475318 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475318' 00:26:06.929 killing process with pid 3475318 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3475318 00:26:06.929 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3475318 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.188 09:37:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.721 09:37:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.721 00:26:09.721 real 0m25.383s 00:26:09.721 user 1m0.081s 00:26:09.721 sys 0m6.263s 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.721 ************************************ 00:26:09.721 END TEST nvmf_bdevperf 00:26:09.721 ************************************ 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.721 ************************************ 00:26:09.721 START TEST nvmf_target_disconnect 00:26:09.721 ************************************ 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:09.721 * Looking for test storage... 00:26:09.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.721 --rc genhtml_branch_coverage=1 00:26:09.721 --rc genhtml_function_coverage=1 00:26:09.721 --rc genhtml_legend=1 00:26:09.721 --rc geninfo_all_blocks=1 00:26:09.721 --rc geninfo_unexecuted_blocks=1 00:26:09.721 00:26:09.721 ' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.721 --rc genhtml_branch_coverage=1 00:26:09.721 --rc genhtml_function_coverage=1 00:26:09.721 --rc genhtml_legend=1 00:26:09.721 --rc geninfo_all_blocks=1 00:26:09.721 --rc geninfo_unexecuted_blocks=1 00:26:09.721 00:26:09.721 ' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.721 --rc genhtml_branch_coverage=1 00:26:09.721 --rc genhtml_function_coverage=1 00:26:09.721 --rc genhtml_legend=1 00:26:09.721 --rc geninfo_all_blocks=1 00:26:09.721 --rc geninfo_unexecuted_blocks=1 00:26:09.721 00:26:09.721 ' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.721 --rc genhtml_branch_coverage=1 00:26:09.721 --rc genhtml_function_coverage=1 00:26:09.721 --rc genhtml_legend=1 00:26:09.721 --rc geninfo_all_blocks=1 00:26:09.721 --rc geninfo_unexecuted_blocks=1 00:26:09.721 00:26:09.721 ' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.721 09:37:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.721 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.722 09:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:14.988 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:14.988 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.988 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:14.988 Found net devices under 0000:af:00.0: cvl_0_0 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:14.989 Found net devices under 0000:af:00.1: cvl_0_1 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:26:14.989 00:26:14.989 --- 10.0.0.2 ping statistics --- 00:26:14.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.989 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:26:14.989 00:26:14.989 --- 10.0.0.1 ping statistics --- 00:26:14.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.989 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.989 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.247 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:15.247 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:15.247 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.247 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:15.247 ************************************ 00:26:15.247 START TEST nvmf_target_disconnect_tc1 00:26:15.247 ************************************ 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:15.248 09:37:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:15.248 [2024-12-13 09:37:27.500790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.248 [2024-12-13 09:37:27.500899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef30b0 with addr=10.0.0.2, port=4420 00:26:15.248 [2024-12-13 09:37:27.500948] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:15.248 [2024-12-13 09:37:27.500978] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:15.248 [2024-12-13 09:37:27.500998] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:15.248 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:15.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:15.248 Initializing NVMe Controllers 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:15.248 00:26:15.248 real 0m0.113s 00:26:15.248 user 0m0.055s 00:26:15.248 sys 0m0.057s 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:15.248 ************************************ 00:26:15.248 END TEST nvmf_target_disconnect_tc1 00:26:15.248 ************************************ 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:15.248 ************************************ 00:26:15.248 START TEST nvmf_target_disconnect_tc2 00:26:15.248 ************************************ 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3480400 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3480400 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3480400 ']' 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:15.248 09:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:15.506 [2024-12-13 09:37:27.633432] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:26:15.506 [2024-12-13 09:37:27.633479] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.506 [2024-12-13 09:37:27.713815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:15.506 [2024-12-13 09:37:27.754786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.506 [2024-12-13 09:37:27.754823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:15.506 [2024-12-13 09:37:27.754830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.506 [2024-12-13 09:37:27.754836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.506 [2024-12-13 09:37:27.754841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.506 [2024-12-13 09:37:27.756368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:15.506 [2024-12-13 09:37:27.756496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:15.506 [2024-12-13 09:37:27.756603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:15.506 [2024-12-13 09:37:27.756603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 Malloc0 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 [2024-12-13 09:37:28.532064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 09:37:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 [2024-12-13 09:37:28.557015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3480640 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:16.440 09:37:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:18.356 09:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3480400 00:26:18.356 09:37:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error 
(sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Write completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 [2024-12-13 09:37:30.584187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed with error (sct=0, sc=8) 00:26:18.356 starting I/O failed 00:26:18.356 Read completed 
with error (sct=0, sc=8) 00:26:18.356 starting I/O failed
[... Read/Write completed with error (sct=0, sc=8), starting I/O failed - repeated for many more I/Os ...]
00:26:18.357 [2024-12-13 09:37:30.584415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... Read/Write completed with error (sct=0, sc=8), starting I/O failed - repeated for many more I/Os ...]
00:26:18.357 [2024-12-13 09:37:30.584621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... Read/Write completed with error (sct=0, sc=8), starting I/O failed - repeated for many more I/Os ...]
00:26:18.357 [2024-12-13 09:37:30.584824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:18.357 [2024-12-13 09:37:30.585022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.357 [2024-12-13 09:37:30.585072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:18.357 qpair failed and we were unable to recover it.
[... same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated many more times ...]
00:26:18.358 [2024-12-13 09:37:30.594017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.358 [2024-12-13 09:37:30.594050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:18.358 qpair failed and we were unable to recover it.
[... same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated many more times ...]
00:26:18.362 [2024-12-13 09:37:30.618729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.618761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.618880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.618913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.619153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.619185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.619370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.619382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.619521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.619534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.619701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.619717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.619811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.619845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.620065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.620279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.620423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 
00:26:18.362 [2024-12-13 09:37:30.620529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.620633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.620790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.620802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.621020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.621051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.621176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.621209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.362 [2024-12-13 09:37:30.621404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.362 [2024-12-13 09:37:30.621415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.362 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.621538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.621551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.621632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.621642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.621796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.621810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.621908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.621919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 
00:26:18.363 [2024-12-13 09:37:30.621984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.621995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.622137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.622149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.622282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.622294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.622437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.622482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.622622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.622654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.622784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.622816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.623012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.623179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.623437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.623671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 
00:26:18.363 [2024-12-13 09:37:30.623767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.623845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.623857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.624017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.624238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.624391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.624574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.624799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.624999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.625032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.625275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.625307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.625432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.625476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 
00:26:18.363 [2024-12-13 09:37:30.625664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.625677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.625833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.625864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.626041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.626073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.626264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.626296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.626499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.626511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.626669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.626702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.626820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.626851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.627038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.627070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.627267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.627279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.627456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.627489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 
00:26:18.363 [2024-12-13 09:37:30.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.627696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.627835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.627867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.628061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.628094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.628353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.628365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.363 [2024-12-13 09:37:30.628505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.363 [2024-12-13 09:37:30.628517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.363 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.628711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.628723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.628806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.628816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.628966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.628998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.629285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.629322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.629510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.629545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 
00:26:18.364 [2024-12-13 09:37:30.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.629817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.629936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.629968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.630162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.630196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.630417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.630459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.630649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.630682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.630859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.630892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.631083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.631114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.631284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.631296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.631535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.631569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.631701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.631733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 
00:26:18.364 [2024-12-13 09:37:30.631935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.631967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.632179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.632212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.632418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.632430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.632610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.632646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.632781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.632815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.633007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.633039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.633311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.633343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.633539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.633575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.633824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.633856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.634097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.634129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 
00:26:18.364 [2024-12-13 09:37:30.634304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.634337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.634465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.634498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.634738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.634749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.634953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.634985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.635117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.635149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.635399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.635432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.635695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.635707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.635869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.635881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.636076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.636256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 
00:26:18.364 [2024-12-13 09:37:30.636342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.636433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.636546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.364 qpair failed and we were unable to recover it. 00:26:18.364 [2024-12-13 09:37:30.636630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.364 [2024-12-13 09:37:30.636641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.636794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.636824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.636940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.636972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.637168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.637200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.637314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.637347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.637525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.637539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.637704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.637736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 
00:26:18.365 [2024-12-13 09:37:30.637932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.637965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.638837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.638868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.639119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.639152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.639401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.639433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.639733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.639746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 
00:26:18.365 [2024-12-13 09:37:30.639874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.639886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.640080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.640249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.640481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.640630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.640790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.640985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.641149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.641234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.641481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 
00:26:18.365 [2024-12-13 09:37:30.641626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.641860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.641891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.642147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.642179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.642308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.642340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.642580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.642612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.642857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.642869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.643037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.643048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.365 qpair failed and we were unable to recover it. 00:26:18.365 [2024-12-13 09:37:30.643251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.365 [2024-12-13 09:37:30.643284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.643473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.643506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.643640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.643684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 
00:26:18.366 [2024-12-13 09:37:30.643761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.643772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.643997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.644028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.644210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.644243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.644425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.644478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.644670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.644681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.644839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.645103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.645114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.645194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.645205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.645362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.645400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.645602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.645636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 
00:26:18.366 [2024-12-13 09:37:30.645770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.645802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.646017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.646049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.646173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.646206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.646323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.646355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.646531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.646566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.646772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.646804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.647064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.647096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.647386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.647418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.647693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.647726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.647917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.647949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 
00:26:18.366 [2024-12-13 09:37:30.648080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.648112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.648234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.648267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.648443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.648498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.648636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.648669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.648847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.648859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.649011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.649044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.649239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.649272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.649385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.649418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.649729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.649768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.650010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.650057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 
00:26:18.366 [2024-12-13 09:37:30.650235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.650255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.650411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.650428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.650621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.650639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.650803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.650835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.651046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.651079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.651340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.651374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.366 [2024-12-13 09:37:30.651546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.366 [2024-12-13 09:37:30.651564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.366 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.651704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.651721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.651898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.651915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.652098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.652132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 
00:26:18.367 [2024-12-13 09:37:30.652385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.652419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.652630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.652682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.652778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.652795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.653043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.653076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.653363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.653396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.653538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.653557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.653635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.653647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.653808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.653820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.654048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.654085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.654326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.654358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 
00:26:18.367 [2024-12-13 09:37:30.654580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.654615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.654777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.654789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.655981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.655992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.656156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 
00:26:18.367 [2024-12-13 09:37:30.656239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.656321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.656467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.656623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.656860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.656892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.657009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.657251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.657387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.657469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.657572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 
00:26:18.367 [2024-12-13 09:37:30.657787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.657821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.658000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.658032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.658152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.658185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.658410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.658442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.658668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.367 [2024-12-13 09:37:30.658703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.367 qpair failed and we were unable to recover it. 00:26:18.367 [2024-12-13 09:37:30.658936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.658948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.659154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.659166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.659394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.659426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.659564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.659597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.659817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.659850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 
00:26:18.368 [2024-12-13 09:37:30.659974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.660214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.660366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.660571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.660778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.660922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.660934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.661059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.661071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.661208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.661241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.661425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.661472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.661771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.661803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 
00:26:18.368 [2024-12-13 09:37:30.661992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.662024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.662308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.662341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.662598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.662632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.662767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.662799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.663011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.663044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.663214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.663245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.663459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.663493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.663672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.663705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.663922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.663934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.664128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.664140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 
00:26:18.368 [2024-12-13 09:37:30.664215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.664225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.664472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.664507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.664697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.664730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.664873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.664905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.665174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.665207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.665428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.665472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.665679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.665913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.665924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.666078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.666111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.666294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.666327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 
00:26:18.368 [2024-12-13 09:37:30.666520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.666554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.666744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.666777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.666955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.666987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.667202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.368 [2024-12-13 09:37:30.667234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.368 qpair failed and we were unable to recover it. 00:26:18.368 [2024-12-13 09:37:30.667413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.667425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.667596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.667670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.667825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.667861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.667999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.668032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.668302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.668335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.668491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.668510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 
00:26:18.369 [2024-12-13 09:37:30.668650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.668682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.668790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.668823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.669050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.669288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.669446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.669724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.669827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.669983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.670224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.670401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 
00:26:18.369 [2024-12-13 09:37:30.670638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.670731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.670821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.670918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.670934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.671799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 
00:26:18.369 [2024-12-13 09:37:30.671977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.671989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.672140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.672172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.672285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.672318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.672428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.672485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.672669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.672702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.672840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.672872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.673001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.673033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.673307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.369 qpair failed and we were unable to recover it. 00:26:18.369 [2024-12-13 09:37:30.673486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.369 [2024-12-13 09:37:30.673498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.673575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.673586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 
00:26:18.370 [2024-12-13 09:37:30.673788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.673800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.673994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.674006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.674170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.674182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.674392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.674424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.674623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.674656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.674853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.674894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.675097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.675131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.675349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.675381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.675583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.675618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.675748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.675766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 
00:26:18.370 [2024-12-13 09:37:30.675992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.676025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.676154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.676186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.676461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.676496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.676670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.676688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.676838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.676869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.676998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.677031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.677228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.677260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.677462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.677497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.677694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.677728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.677989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 
00:26:18.370 [2024-12-13 09:37:30.678205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.678367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.678537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.678642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.678748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.678863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.678878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.679018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.679230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.679315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.679566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 
00:26:18.370 [2024-12-13 09:37:30.679672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.679848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.679880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.370 qpair failed and we were unable to recover it. 00:26:18.370 [2024-12-13 09:37:30.680868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.370 [2024-12-13 09:37:30.680879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.680956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.680967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.681612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.681635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 
00:26:18.371 [2024-12-13 09:37:30.681782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.681795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.681998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.682965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.682976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 
00:26:18.371 [2024-12-13 09:37:30.683362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.683884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.683896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 
00:26:18.371 [2024-12-13 09:37:30.684623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.684813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.684988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.685797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 
00:26:18.371 [2024-12-13 09:37:30.685983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.685994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.371 qpair failed and we were unable to recover it. 00:26:18.371 [2024-12-13 09:37:30.686674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.371 [2024-12-13 09:37:30.686685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.686828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.686840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.686909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.686919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 
00:26:18.372 [2024-12-13 09:37:30.687262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.687924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 
00:26:18.372 [2024-12-13 09:37:30.688423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.688913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.688925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 
00:26:18.372 [2024-12-13 09:37:30.689490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.689948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.689960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 
00:26:18.372 [2024-12-13 09:37:30.690501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.690907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.690918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.372 [2024-12-13 09:37:30.691052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.372 [2024-12-13 09:37:30.691063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.372 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 
00:26:18.373 [2024-12-13 09:37:30.691593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.691924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.691936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 
00:26:18.373 [2024-12-13 09:37:30.692704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.692859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.692994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.693853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.693998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 
00:26:18.373 [2024-12-13 09:37:30.694091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.694850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.694862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 
00:26:18.373 [2024-12-13 09:37:30.695412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.373 [2024-12-13 09:37:30.695778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.373 qpair failed and we were unable to recover it. 00:26:18.373 [2024-12-13 09:37:30.695860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.695872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.695952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.695964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 
00:26:18.374 [2024-12-13 09:37:30.696377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.696911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.696921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 
00:26:18.374 [2024-12-13 09:37:30.697711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.697935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.697946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 
00:26:18.374 [2024-12-13 09:37:30.698901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.698913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.698999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.374 [2024-12-13 09:37:30.699775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.374 qpair failed and we were unable to recover it. 00:26:18.374 [2024-12-13 09:37:30.699908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.699920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 
00:26:18.375 [2024-12-13 09:37:30.699981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.699991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.700906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.700916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 
00:26:18.375 [2024-12-13 09:37:30.701182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.701946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.701958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 
00:26:18.375 [2024-12-13 09:37:30.702207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.702921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.702933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 
00:26:18.375 [2024-12-13 09:37:30.703330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.703976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.704190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.704203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.704285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.704297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.704432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.704444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.375 qpair failed and we were unable to recover it. 00:26:18.375 [2024-12-13 09:37:30.704607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.375 [2024-12-13 09:37:30.704620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.704780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.704792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 
00:26:18.376 [2024-12-13 09:37:30.704935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.704948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.705921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.705933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 
00:26:18.376 [2024-12-13 09:37:30.706234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.706967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.706980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.707212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.707225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.707300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.707312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 
00:26:18.376 [2024-12-13 09:37:30.707371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.707382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.707522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.707536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.707814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.707848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.708925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.708937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.709094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 
00:26:18.376 [2024-12-13 09:37:30.709290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.709445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.709554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.709645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.709878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.709910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.710037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.710070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.710185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.710217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.376 qpair failed and we were unable to recover it. 00:26:18.376 [2024-12-13 09:37:30.710418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.376 [2024-12-13 09:37:30.710457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.377 qpair failed and we were unable to recover it. 00:26:18.377 [2024-12-13 09:37:30.710635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.377 [2024-12-13 09:37:30.710668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.377 qpair failed and we were unable to recover it. 00:26:18.377 [2024-12-13 09:37:30.710842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.377 [2024-12-13 09:37:30.710874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.377 qpair failed and we were unable to recover it. 
00:26:18.659 [2024-12-13 09:37:30.711157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.711190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.711312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.711344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.711559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.711608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.711816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.711852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.712866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 
00:26:18.659 [2024-12-13 09:37:30.712968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.712985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.713129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.713147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.659 [2024-12-13 09:37:30.713320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.659 [2024-12-13 09:37:30.713336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.659 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.713411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.713425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.713516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.713529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.713666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.713678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.713827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.713839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.713926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.713938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.714153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.714234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 
00:26:18.660 [2024-12-13 09:37:30.714374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.714464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.714634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.714852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.714885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.715077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.715110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.715232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.715264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.715441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.715484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.715615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.715648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.715837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.715869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.716096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.716169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 
00:26:18.660 [2024-12-13 09:37:30.716404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.716441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.716680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.716698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.716786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.716803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.716960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.716977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.717217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.717250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.717516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.717551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.717694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.717726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.717848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.717881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.718070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.718104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.718394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.718428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 
00:26:18.660 [2024-12-13 09:37:30.718559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.718592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.718717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.718750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.718978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.718999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.719732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.719764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.720067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.720100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 
00:26:18.660 [2024-12-13 09:37:30.720239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.660 [2024-12-13 09:37:30.720271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.660 qpair failed and we were unable to recover it. 00:26:18.660 [2024-12-13 09:37:30.720403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.720435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.720639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.720672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.720790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.720822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.720921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.720954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.721073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.721105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.721236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.721269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.721511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.721549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.721825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.721858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.722061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.722094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 
00:26:18.661 [2024-12-13 09:37:30.722285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.722317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.722497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.722510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.722654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.722687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.722807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.722839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.723026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.723058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.723249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.723282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.723513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.723548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.723670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.723683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.723759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.723769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.724048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.724132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 
00:26:18.661 [2024-12-13 09:37:30.724416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.724461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.724582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.724599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.724699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.724717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.724899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.724932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.725118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.725151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.725356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.725389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.725585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.725620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.725740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.725772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.726025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.726043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.726182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.726201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 
00:26:18.661 [2024-12-13 09:37:30.726411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.726443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.726632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.726665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.726787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.726827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.726996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.727014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.727182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.727215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.727352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.727386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.727584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.727618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.727751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.727768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.727997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.728030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 00:26:18.661 [2024-12-13 09:37:30.728222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.661 [2024-12-13 09:37:30.728255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.661 qpair failed and we were unable to recover it. 
00:26:18.662 [2024-12-13 09:37:30.728446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.728492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.728631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.728649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.728886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.728920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.729120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.729153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.729272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.729305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.729495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.729530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.729783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.729817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.729999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.730223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.730431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 
00:26:18.662 [2024-12-13 09:37:30.730619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.730815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.730966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.730999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.731141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.731174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.731442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.731484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.731630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.731663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.731867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.731899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 
00:26:18.662 [2024-12-13 09:37:30.732645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.732978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.732995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.733079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.733095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.733314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.733332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.733476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.733494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.733701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.733719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.733937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.733969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.734072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.734104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 
00:26:18.662 [2024-12-13 09:37:30.734373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.734405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.734543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.734577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.734791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.734823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.734955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.734987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.735109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.735155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.735298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.735316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.735504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.735538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.735712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.735745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.662 [2024-12-13 09:37:30.735874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.662 [2024-12-13 09:37:30.735907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.662 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.736086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.736118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 
00:26:18.663 [2024-12-13 09:37:30.736309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.736342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.736559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.736593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.736772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.736804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.736913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.736944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.737211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.737372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.737612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.737728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.737899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.737988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.738031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 
00:26:18.663 [2024-12-13 09:37:30.738210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.738244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.738429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.738470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.738662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.738696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.738828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.738860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.738972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.739132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.739315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.739577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.739724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.739936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.739975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 
00:26:18.663 [2024-12-13 09:37:30.740171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.740203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.740380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.740413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.740606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.740640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.740752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.740784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.740900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.740933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.741110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.741143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.741344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.741378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.741501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.741535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.741801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.741841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.741982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.741999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 
00:26:18.663 [2024-12-13 09:37:30.742165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.742199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.742382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.742414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.742610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.742644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.742828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.742862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.743073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.743106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.743293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.743325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.743568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.743602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.663 [2024-12-13 09:37:30.743843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.663 [2024-12-13 09:37:30.743875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.663 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.744070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.744247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 
00:26:18.664 [2024-12-13 09:37:30.744356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.744515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.744679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.744923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.744956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.745078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.745111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.745378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.745411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.745555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.745590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.745850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.745868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.746115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.746133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.746227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.746243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 
00:26:18.664 [2024-12-13 09:37:30.746322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.746337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.664 [2024-12-13 09:37:30.746432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.664 [2024-12-13 09:37:30.746454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.664 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.746542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.746559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.746724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.746742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.746911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.746945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.747140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.747173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.747441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.747497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.747675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.747716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.747807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.747823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.748010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.748049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 
00:26:18.665 [2024-12-13 09:37:30.748175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.748209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.748474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.748510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.748717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.748750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.748923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.748956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.749167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.749200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.749460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.749494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.749631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.749648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.749794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.749831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.749953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.749987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.750119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.750153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 
00:26:18.665 [2024-12-13 09:37:30.750326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.750359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.750492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.750527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.750720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.750754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.750937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.750955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.751111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.751129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.751354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.751371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.751467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.751483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.751653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.751686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.751878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.751911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.752116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.752149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 
00:26:18.665 [2024-12-13 09:37:30.752339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.752371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.752554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.752573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.752758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.752791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.752970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.753003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.753200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.753233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.753428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.753469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.753656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.753675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.753831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.753865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.753971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.754004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 00:26:18.665 [2024-12-13 09:37:30.754121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.665 [2024-12-13 09:37:30.754155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.665 qpair failed and we were unable to recover it. 
00:26:18.665 [2024-12-13 09:37:30.754407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.754440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.754710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.754743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.754922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.754939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.755032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.755047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.755146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.755162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.755399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.755434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.755665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.755699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.755901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.755941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.756034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.756050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.756220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.756259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 
00:26:18.666 [2024-12-13 09:37:30.756386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.756420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.756717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.756774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.757867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.757877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.758015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.758048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 
00:26:18.666 [2024-12-13 09:37:30.758246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.758278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.758466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.758501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.758675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.758708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.758867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.758879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.759032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.759064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.759267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.759299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.759519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.759555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.759767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.759800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.760089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.760122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.760393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.760427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 
00:26:18.666 [2024-12-13 09:37:30.760645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.760679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.760813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.760846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.761931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.761963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.762099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.666 [2024-12-13 09:37:30.762134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.666 qpair failed and we were unable to recover it. 00:26:18.666 [2024-12-13 09:37:30.762398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.762430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 
00:26:18.667 [2024-12-13 09:37:30.762622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.762635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.762714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.762725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.762829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.762862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.763121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.763155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.763332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.763365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.763493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.763528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.763770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.763803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.763980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.764014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.764207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.764219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.764361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.764375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 
00:26:18.667 [2024-12-13 09:37:30.764531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.764543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.764772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.764804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.764980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.765134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.765291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.765469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.765783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.765980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.765992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.766162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.766194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.766370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.766402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 
00:26:18.667 [2024-12-13 09:37:30.766659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.766694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.766947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.766960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.767919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.767931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 
00:26:18.667 [2024-12-13 09:37:30.768331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.768925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.768954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.769198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.769232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.769359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.769391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.769507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.667 [2024-12-13 09:37:30.769540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.667 qpair failed and we were unable to recover it. 00:26:18.667 [2024-12-13 09:37:30.769683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.769695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.769827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.769839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 
00:26:18.668 [2024-12-13 09:37:30.770028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.770061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.770240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.770273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.770379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.770411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.770619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.770631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.770796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.770830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.771071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.771206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.771367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.771628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.771801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 
00:26:18.668 [2024-12-13 09:37:30.771914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.771945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.772130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.772163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.772297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.772331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.772517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.772552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.772820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.772852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.773071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.773105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.773221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.773254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.773563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.773596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.773714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.773747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.773921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.773961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 
00:26:18.668 [2024-12-13 09:37:30.774105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.774169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.774339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.774526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.774707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.774855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.774887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.775067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.775100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.775232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.775265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.775537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.775570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.775768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.775800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 
00:26:18.668 [2024-12-13 09:37:30.775984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.776017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.776210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.776243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.776490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.776524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.776699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.776732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.776915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.776962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.777051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.777063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.777151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.668 [2024-12-13 09:37:30.777162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.668 qpair failed and we were unable to recover it. 00:26:18.668 [2024-12-13 09:37:30.777268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.777302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.777543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.777578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.777768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.777802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 
00:26:18.669 [2024-12-13 09:37:30.777914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.777947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.778126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.778158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.778409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.778442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.778644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.778677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.778865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.778899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.779021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.779055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.779241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.779253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.779393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.779426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.779651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.779686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.779886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.779925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 
00:26:18.669 [2024-12-13 09:37:30.780034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.780197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.780497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.780629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.780779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.780986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.780997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.781128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.781140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.781213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.781224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.781316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.781348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.781619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.781653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 
00:26:18.669 [2024-12-13 09:37:30.781895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.781927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.782055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.782089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.782297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.782330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.782538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.782573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.782766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.782798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.782989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.783022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.783210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.783243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.783419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.783459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.783606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.783638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.783819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.783852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 
00:26:18.669 [2024-12-13 09:37:30.784041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.669 [2024-12-13 09:37:30.784075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.669 qpair failed and we were unable to recover it. 00:26:18.669 [2024-12-13 09:37:30.784263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.784296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.784486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.784522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.784630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.784662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.784931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.784963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.785221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.785244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.785445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.785460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.785565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.785576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.785717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.785729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.785873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.785906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 
00:26:18.670 [2024-12-13 09:37:30.786120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.786153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.786280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.786313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.786457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.786490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.786619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.786652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.786821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.786832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.786981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.787014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.787187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.787220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.787343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.787376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.787620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.787655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.787777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.787820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 
00:26:18.670 [2024-12-13 09:37:30.787995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.788028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.788225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.788258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.788539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.788582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.788780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.788792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.788899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.788931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.789059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.789091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.789333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.789366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.789487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.789522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.789698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.789731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.789905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 
00:26:18.670 [2024-12-13 09:37:30.790008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.790019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.790140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.790173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.790308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.790341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.790485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.790520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.790711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.790744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.790999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.791012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.791092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.791102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.791328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.791361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.791484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.791519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 00:26:18.670 [2024-12-13 09:37:30.791637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.670 [2024-12-13 09:37:30.791669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.670 qpair failed and we were unable to recover it. 
00:26:18.670 [2024-12-13 09:37:30.791864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.791897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.792111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.792123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.792334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.792366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.792572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.792606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.792719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.792751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.792955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.792988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.793241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.793253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.793392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.793404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.793541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.793572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.793703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.793736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 
00:26:18.671 [2024-12-13 09:37:30.793939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.793971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.794108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.794139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.794410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.794422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.794569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.794582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.794712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.794743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.794919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.794952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.795202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.795235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.795509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.795543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.795776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.795809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 00:26:18.671 [2024-12-13 09:37:30.796016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-12-13 09:37:30.796055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.671 qpair failed and we were unable to recover it. 
00:26:18.671 [2024-12-13 09:37:30.796322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.671 [2024-12-13 09:37:30.796355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:18.671 qpair failed and we were unable to recover it.
00:26:18.676 (the previous three messages repeat for every connection attempt logged from [2024-12-13 09:37:30.796322] through [2024-12-13 09:37:30.845368]: each connect() to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fd56c000b90, and each qpair fails and cannot be recovered)
00:26:18.676 [2024-12-13 09:37:30.845556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.676 [2024-12-13 09:37:30.845590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.845712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.845745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.845964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.845976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.846891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.846904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.847083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.847117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 
00:26:18.677 [2024-12-13 09:37:30.847376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.847409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.847634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.847670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.847917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.847950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.848193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.848206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.848369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.848382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.848604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.848617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.848868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.848907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.849124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.849156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.849335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.849370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.849508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.849542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 
00:26:18.677 [2024-12-13 09:37:30.849735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.849772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.850024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.850056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.850168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.850202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.850398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.850431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.850654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.850687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.850886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.850919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.851048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.851061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.851223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.851236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.851461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.851495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.851724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.851758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 
00:26:18.677 [2024-12-13 09:37:30.851977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.852010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.852256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.852270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.852353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.852364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.852515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.852527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.852674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-12-13 09:37:30.852686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.677 qpair failed and we were unable to recover it. 00:26:18.677 [2024-12-13 09:37:30.852849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.852883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.853170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.853204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.853391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.853425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.853572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.853607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.853873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.853906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 
00:26:18.678 [2024-12-13 09:37:30.854117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.854151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.854398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.854434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.854581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.854615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.854816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.854850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.855053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.855089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.855280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.855293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.855591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.855628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.855904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.855939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.856255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.856268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.856487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.856502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 
00:26:18.678 [2024-12-13 09:37:30.856638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.856650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.856851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.856865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.856944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.856956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.857906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.857919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.858018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.858029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 
00:26:18.678 [2024-12-13 09:37:30.858180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.858209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.858392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.858426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.858636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.858671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.858792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.858834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.859061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.859075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.859766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.859790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.859950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.859986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.860252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.860288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.860475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.860510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.860639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.860676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 
00:26:18.678 [2024-12-13 09:37:30.860880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.860915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.861044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.861077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.861283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-12-13 09:37:30.861320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.678 qpair failed and we were unable to recover it. 00:26:18.678 [2024-12-13 09:37:30.861549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.861584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.861797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.861836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.862100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.862115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.862276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.862288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.862457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.862471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.862632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.862664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.862939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.862975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 
00:26:18.679 [2024-12-13 09:37:30.863249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.863283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.863593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.863629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.863892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.863904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.864002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.864012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.864166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.864205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.864415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.864466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.864663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.864696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.864892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.864926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.865061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.865207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 
00:26:18.679 [2024-12-13 09:37:30.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.865562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.865764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.865935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.865953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 
00:26:18.679 [2024-12-13 09:37:30.866901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.866912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.866991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.867954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.867965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 
00:26:18.679 [2024-12-13 09:37:30.868053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.868066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.868143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-12-13 09:37:30.868155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.679 qpair failed and we were unable to recover it. 00:26:18.679 [2024-12-13 09:37:30.868225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.868901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.868912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 
00:26:18.680 [2024-12-13 09:37:30.869147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.869871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.869882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.870015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.870106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.870322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 
00:26:18.680 [2024-12-13 09:37:30.870537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.870697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.870856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.870869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.871864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.871876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 
00:26:18.680 [2024-12-13 09:37:30.872034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.872839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.872851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.873010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.873024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.680 [2024-12-13 09:37:30.873094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.873107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 
00:26:18.680 [2024-12-13 09:37:30.873182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.680 [2024-12-13 09:37:30.873194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.680 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.873922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.873935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 
00:26:18.681 [2024-12-13 09:37:30.874291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.874985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.874999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.875060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.875072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.875148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.875160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.875240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.875252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 
00:26:18.681 [2024-12-13 09:37:30.875326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.875337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.876983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.876996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.877142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.877155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.877287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.877298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 
00:26:18.681 [2024-12-13 09:37:30.877428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.877441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.877608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.877645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.877823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.877856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.877976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.681 [2024-12-13 09:37:30.878709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.681 qpair failed and we were unable to recover it. 00:26:18.681 [2024-12-13 09:37:30.878835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.878847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 
00:26:18.682 [2024-12-13 09:37:30.878914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.878925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.879910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.879922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 
00:26:18.682 [2024-12-13 09:37:30.880406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.880916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.880927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 
00:26:18.682 [2024-12-13 09:37:30.881338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.881862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.881993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 
00:26:18.682 [2024-12-13 09:37:30.882582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.682 [2024-12-13 09:37:30.882897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.682 [2024-12-13 09:37:30.882908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.682 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 
00:26:18.683 [2024-12-13 09:37:30.883539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.883971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.883983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 
00:26:18.683 [2024-12-13 09:37:30.884799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.884943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.884955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.885192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.885203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.885444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.885462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.885740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.885751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.885901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.885913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.886172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.886205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.886341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.886375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.886681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.886716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.886849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.886882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 
00:26:18.683 [2024-12-13 09:37:30.886998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.887031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.887355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.887367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.887498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.887511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.887705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.887719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.887818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.887829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 
00:26:18.683 [2024-12-13 09:37:30.888834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.888845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.683 [2024-12-13 09:37:30.888993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.683 [2024-12-13 09:37:30.889004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.683 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.889248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.889260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.889401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.889413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.889562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.889576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.889742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.889755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.889917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.889949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.890193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.890226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.890427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.890474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.890601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.890633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 
00:26:18.684 [2024-12-13 09:37:30.890820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.890852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.891895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.891909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.892000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.892013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.892280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.892313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.892435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.892542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 
00:26:18.684 [2024-12-13 09:37:30.892688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.892721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.892914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.892926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.893111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.893122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.893269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.893281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.893412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.893424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.893631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.893643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.893888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.893900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.894051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.894238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.894334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 
00:26:18.684 [2024-12-13 09:37:30.894611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.894700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.894860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.894872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.895898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 00:26:18.684 [2024-12-13 09:37:30.895997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.896010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it. 
00:26:18.684 [2024-12-13 09:37:30.896093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.684 [2024-12-13 09:37:30.896126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.684 qpair failed and we were unable to recover it.
00:26:18.686 [2024-12-13 09:37:30.907311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.686 [2024-12-13 09:37:30.907349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:18.686 qpair failed and we were unable to recover it.
00:26:18.690 [message pair repeated for each retried connect() between 09:37:30.896093 and 09:37:30.929756: every attempt against addr=10.0.0.2, port=4420 failed with errno = 111 (connection refused) on tqpair handles 0x7fd56c000b90 and 0x7fd568000b90, and each ended with "qpair failed and we were unable to recover it."]
00:26:18.690 [2024-12-13 09:37:30.930002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.930985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.930995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 
00:26:18.690 [2024-12-13 09:37:30.931342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.931954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.690 [2024-12-13 09:37:30.931966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.690 qpair failed and we were unable to recover it. 00:26:18.690 [2024-12-13 09:37:30.932133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.932247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.932457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.932658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 
00:26:18.691 [2024-12-13 09:37:30.932864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.932958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.932968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.933217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.933229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.933369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.933381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.933612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.933624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.933852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.933864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.933996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.934264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.934440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.934558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 
00:26:18.691 [2024-12-13 09:37:30.934667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.934830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.934929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.934940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.935865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.935876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 
00:26:18.691 [2024-12-13 09:37:30.936200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.936880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.936892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.937104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.937115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.937308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.937320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.937524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.937536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.691 [2024-12-13 09:37:30.937735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.937747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 
00:26:18.691 [2024-12-13 09:37:30.937892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.691 [2024-12-13 09:37:30.937903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.691 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.938031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.938042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.938191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.938202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.938416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.938431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.938645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.938657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.938869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.938881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.939012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.939025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.939272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.939285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.939515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.939528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.939699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.939711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-12-13 09:37:30.939906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.939917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.940871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.940882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-12-13 09:37:30.941444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.941880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.941892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.942945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.942956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 
00:26:18.692 [2024-12-13 09:37:30.943208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.943220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.943359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.943371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.943522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.943535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.943731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.943743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.943965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.943978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.944120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.944132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.944280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.944291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.944429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.944441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.944606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.944619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.692 qpair failed and we were unable to recover it. 00:26:18.692 [2024-12-13 09:37:30.944759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.692 [2024-12-13 09:37:30.944772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-12-13 09:37:30.944935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.944948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.945037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.945048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.945205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.945217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.945414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.945428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.945578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.945590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.945796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.945808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.946034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.946206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.946426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.946551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-12-13 09:37:30.946661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.946852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.946864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.947863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.947992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.948219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-12-13 09:37:30.948378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.948489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.948595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.948780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.948793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.949870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.949882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 
00:26:18.693 [2024-12-13 09:37:30.950059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.950292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.950382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.950592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.950831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.950984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.950997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.951245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.951257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.951332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.693 [2024-12-13 09:37:30.951342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.693 qpair failed and we were unable to recover it. 00:26:18.693 [2024-12-13 09:37:30.951563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.951576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.951738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.951751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 
00:26:18.694 [2024-12-13 09:37:30.951883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.951895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.952044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.952056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.952309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.952321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.952539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.952554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.952721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.952733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.952942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.952954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.953088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.953100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.953313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.953325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.953532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.953730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.953742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 
00:26:18.694 [2024-12-13 09:37:30.953894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.953906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.954168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.954180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.954313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.954326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.954503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.954516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.954735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.954748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.954883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.954896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.955042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.955200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.955433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.955603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 
00:26:18.694 [2024-12-13 09:37:30.955712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.955873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.955885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.956897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.956910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.957075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 
00:26:18.694 [2024-12-13 09:37:30.957222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.957456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.957602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.957696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.957801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.957814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.958010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.958023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.958157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.958170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.694 qpair failed and we were unable to recover it. 00:26:18.694 [2024-12-13 09:37:30.958327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-12-13 09:37:30.958339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.958569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.958583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.958779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.958793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 
00:26:18.695 [2024-12-13 09:37:30.958924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.958937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.959135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.959147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.959339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.959352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.959551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.959567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.959772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.959785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.959986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.959999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.960178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.960191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.960397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.960410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.960562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.960575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.960824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.960837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 
00:26:18.695 [2024-12-13 09:37:30.961062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.961299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.961531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.961688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.961836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.961939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.961950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.962113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.962125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.962271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.962283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.962503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.962517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.962734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.962748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 
00:26:18.695 [2024-12-13 09:37:30.962891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.962903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.963879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.963891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.964091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.964103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.964262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.964275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.964358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.964369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 
00:26:18.695 [2024-12-13 09:37:30.964456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.964468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.695 qpair failed and we were unable to recover it. 00:26:18.695 [2024-12-13 09:37:30.964687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.695 [2024-12-13 09:37:30.964700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.964867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.964880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.965939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.965950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.966151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 
00:26:18.696 [2024-12-13 09:37:30.966246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.966405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.966559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.966771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.966983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.966996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.967174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.967187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.967418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.967431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.967502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.967515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.967681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.967694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.967835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.967848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 
00:26:18.696 [2024-12-13 09:37:30.968054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.968328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.968485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.968587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.968680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.968848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.968860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.969007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.969019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.969290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.969304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.969575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.969588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.969824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.969836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 
00:26:18.696 [2024-12-13 09:37:30.969999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.970253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.970394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.970541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.970701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.970879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.970891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.971022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.971034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.971241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.971254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.971390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.971403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 00:26:18.696 [2024-12-13 09:37:30.971552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.696 [2024-12-13 09:37:30.971565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.696 qpair failed and we were unable to recover it. 
00:26:18.696 [2024-12-13 09:37:30.971723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.971735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.971904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.971917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.971998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.972180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.972351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.972607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.972753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.972895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.972907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 
00:26:18.697 [2024-12-13 09:37:30.973437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.973984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.973996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.974149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.974161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.974318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.974330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.974492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.974504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.974677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.974689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.974887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.974899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 
00:26:18.697 [2024-12-13 09:37:30.975131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.975143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.975354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.975367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.975590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.975603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.975694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.975705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.975853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.975865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 
00:26:18.697 [2024-12-13 09:37:30.976765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.976927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.976943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.977803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.697 [2024-12-13 09:37:30.977816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.697 qpair failed and we were unable to recover it. 00:26:18.697 [2024-12-13 09:37:30.978037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.978203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 
00:26:18.698 [2024-12-13 09:37:30.978360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.978554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.978646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.978824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.978986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.978998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.979171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.979338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.979492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.979644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.979804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 
00:26:18.698 [2024-12-13 09:37:30.979893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.979904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.980875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.980887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 
00:26:18.698 [2024-12-13 09:37:30.981581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.981927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.981939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.982734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 
00:26:18.698 [2024-12-13 09:37:30.982938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.982950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.698 [2024-12-13 09:37:30.983859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.698 [2024-12-13 09:37:30.983870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.698 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.983953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.983964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 
00:26:18.699 [2024-12-13 09:37:30.984515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.984886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.984896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.985153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.985165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.985383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.985395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.985587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.985599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.985671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.985682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.985947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.985959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 
00:26:18.699 [2024-12-13 09:37:30.986112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.986287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.986384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.986604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.986769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.986944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.986956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.987111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.987366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.987467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.987614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 
00:26:18.699 [2024-12-13 09:37:30.987722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.987867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.987879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.988953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.988964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 
00:26:18.699 [2024-12-13 09:37:30.989114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.989126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.989262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.989274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.989455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.989467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.989626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.989638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.989784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.989796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.699 [2024-12-13 09:37:30.989991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.699 [2024-12-13 09:37:30.990002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.699 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.990305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.990316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.990393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.990404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.990490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.990501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.990706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.990718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 
00:26:18.700 [2024-12-13 09:37:30.990872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.990884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.991973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.991985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 
00:26:18.700 [2024-12-13 09:37:30.992335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.992987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.992998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.993183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.993419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.993514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.993600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 
00:26:18.700 [2024-12-13 09:37:30.993741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.993891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.993902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.994825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.994993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.995004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 00:26:18.700 [2024-12-13 09:37:30.995175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.700 [2024-12-13 09:37:30.995187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.700 qpair failed and we were unable to recover it. 
00:26:18.700 [2024-12-13 09:37:30.995274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.995378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.995471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.995573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.995652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.995831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.995843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.996023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.996203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.996418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.996652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 
00:26:18.701 [2024-12-13 09:37:30.996742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.996890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.996902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.997914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.997991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.998260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 
00:26:18.701 [2024-12-13 09:37:30.998483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.998649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.998743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.998850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.998945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.998956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.999190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.999361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.999504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.999599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:30.999760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 
00:26:18.701 [2024-12-13 09:37:30.999870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:30.999883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.000139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.000300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.000392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.000629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.000840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.000987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.001000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.001161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.001174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.001368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.701 [2024-12-13 09:37:31.001380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.701 qpair failed and we were unable to recover it. 00:26:18.701 [2024-12-13 09:37:31.001516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.001529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 
00:26:18.702 [2024-12-13 09:37:31.001617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.001627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.001694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.001717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.001854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.001866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.001951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.001962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.002047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.002057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.002233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.002244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.702 [2024-12-13 09:37:31.002486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.702 [2024-12-13 09:37:31.002499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.702 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.002628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.002641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.002722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.002736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.002949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.002962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 
00:26:18.997 [2024-12-13 09:37:31.003156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.003324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.003582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.003680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.003873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.003975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.003988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 
00:26:18.997 [2024-12-13 09:37:31.004657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.004965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.004978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.005122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.005134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.005283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.997 [2024-12-13 09:37:31.005296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.997 qpair failed and we were unable to recover it. 00:26:18.997 [2024-12-13 09:37:31.005492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.005505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.005599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.005610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.005775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.005787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 
00:26:18.998 [2024-12-13 09:37:31.006102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.006880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.006892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.007192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.007204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.007360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.007372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.007466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.007477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 00:26:18.998 [2024-12-13 09:37:31.007568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.998 [2024-12-13 09:37:31.007580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:18.998 qpair failed and we were unable to recover it. 
00:26:18.998 [2024-12-13 09:37:31.007813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.998 [2024-12-13 09:37:31.007825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:18.998 qpair failed and we were unable to recover it.
[... the same posix.c:1054:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats continuously from 09:37:31.007 through 09:37:31.049 (elapsed 00:26:18.998 to 00:26:19.006); duplicate entries condensed ...]
00:26:19.006 [2024-12-13 09:37:31.049394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.006 [2024-12-13 09:37:31.049427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.006 qpair failed and we were unable to recover it.
00:26:19.006 [2024-12-13 09:37:31.049627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.049667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.049909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.049941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.050165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.050199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.050376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.050408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.050580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.050592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.050766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.050798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.051004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.051037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.051325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.051358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.051497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.051531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.051730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.051763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 
00:26:19.006 [2024-12-13 09:37:31.052021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.052054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.052342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.052375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.052556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.052591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.052881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.052915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.006 qpair failed and we were unable to recover it. 00:26:19.006 [2024-12-13 09:37:31.053054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.006 [2024-12-13 09:37:31.053087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.053274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.053307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.053521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.053555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.053796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.053828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.054023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.054057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.054231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.054264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 
00:26:19.007 [2024-12-13 09:37:31.054384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.054417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.054752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.054792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.054982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.055002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.055183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.055217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.055409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.055443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.055739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.055774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.055932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.055966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.056262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.056335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.056554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.056595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.056779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.056812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 
00:26:19.007 [2024-12-13 09:37:31.056950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.056983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.057118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.057151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.057291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.057324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.057501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.057535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.057683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.057717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.057891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.057925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.058062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.058096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.058313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.058346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.058476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.058495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.058675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.058708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 
00:26:19.007 [2024-12-13 09:37:31.058910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.058953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.007 [2024-12-13 09:37:31.059214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.007 [2024-12-13 09:37:31.059247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.007 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.059423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.059469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.059751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.059784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.059979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.060013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.060195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.060227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.060415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.060433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.060602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.060642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.060909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.060944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.061246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.061280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 
00:26:19.008 [2024-12-13 09:37:31.061415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.061458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.061651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.061684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.061960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.061992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.062185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.062218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.062475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.062504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.062667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.062701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.062944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.062976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.063223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.063255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.063564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.063598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.063849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.063881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 
00:26:19.008 [2024-12-13 09:37:31.064099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.064132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.064269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.064303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.064606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.064640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.064887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.064904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.065121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.065139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.065348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.065365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.065657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.065690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.065890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.065929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.008 [2024-12-13 09:37:31.066077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.008 [2024-12-13 09:37:31.066110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.008 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.066319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.066335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 
00:26:19.009 [2024-12-13 09:37:31.066429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.066445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.066634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.066652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.066888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.066921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.067108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.067141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.067339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.067371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.067489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.067506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.067738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.067756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.067974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.067992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.068184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.068202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.068439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.068461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 
00:26:19.009 [2024-12-13 09:37:31.068561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.068582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.068783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.068815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.069010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.069041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.069287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.069320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.069502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.069521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.069698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.069731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.069855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.069887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.070123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.070156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.070353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.070385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.070706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.070741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 
00:26:19.009 [2024-12-13 09:37:31.070984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.071017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.071216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.071249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.071498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.071682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.071699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.009 qpair failed and we were unable to recover it. 00:26:19.009 [2024-12-13 09:37:31.071790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.009 [2024-12-13 09:37:31.071806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.071899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.071915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.072094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.072112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.072322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.072339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.072545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.072564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.072726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.072758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 
00:26:19.010 [2024-12-13 09:37:31.072880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.072914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.073182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.073215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.073482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.073500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.073765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.073782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.073928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.073961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.074178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.074211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.074454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.074472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.074583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.074624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.074737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.074757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.074920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.074939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 
00:26:19.010 [2024-12-13 09:37:31.075095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.075113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.075282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.075315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.075600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.075636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.075836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.075870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.076063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.076096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.076340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.076374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.076563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.076581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.076737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.010 [2024-12-13 09:37:31.076772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.010 qpair failed and we were unable to recover it. 00:26:19.010 [2024-12-13 09:37:31.076986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.077199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 
00:26:19.011 [2024-12-13 09:37:31.077348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.077490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.077676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.077855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.077887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.078012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.078045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.078180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.078213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.078391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.078424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.078629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.078662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.078872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.078905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.079209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.079243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 
00:26:19.011 [2024-12-13 09:37:31.079505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.079523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.079732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.079750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.079982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.080000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.080199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.080232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.080429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.080472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.080724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.080757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.080957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.080989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.081168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.081201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.081401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.081418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.081613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.081646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 
00:26:19.011 [2024-12-13 09:37:31.081847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.081880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.082161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.082195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.082439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.082479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.082622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.082656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.082844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.082862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.082958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.083003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.083277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.083310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.011 [2024-12-13 09:37:31.083544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.011 [2024-12-13 09:37:31.083564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.011 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.083726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.083744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.083864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.083882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 
00:26:19.012 [2024-12-13 09:37:31.083991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.084009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.084220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.084252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.084529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.084565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.084703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.084719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.084879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.084896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.085080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.085241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.085496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.085633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.085810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 
00:26:19.012 [2024-12-13 09:37:31.085940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.085960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.086969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.086985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.087160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.087194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.087441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.087495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.087691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.087725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 
00:26:19.012 [2024-12-13 09:37:31.087909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.087944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.088091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.088125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.088338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.088380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.088594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.088613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.088758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.088776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.088935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.088975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.089192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.012 [2024-12-13 09:37:31.089225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.012 qpair failed and we were unable to recover it. 00:26:19.012 [2024-12-13 09:37:31.089358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.089392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.089595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.089614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.089828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.089848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 
00:26:19.013 [2024-12-13 09:37:31.090003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.090274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.090477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.090655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.090776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.090950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.090996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.091331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.091370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.091630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.091651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.091887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.091905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.092003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.092021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 
00:26:19.013 [2024-12-13 09:37:31.092211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.092229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.092462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.092480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.092694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.092712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.092876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.092894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.093231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.093465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.093483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.093597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.093615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.093704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.093721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.093897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.093916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.094087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.094105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 
00:26:19.013 [2024-12-13 09:37:31.094193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.094213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.094383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.094401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.094630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.094648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.094811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.094829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.095010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.095028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.095182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.095199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.013 [2024-12-13 09:37:31.095375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.013 [2024-12-13 09:37:31.095392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.013 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.095572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.095590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.095690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.095705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.095861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.095878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 
00:26:19.014 [2024-12-13 09:37:31.095973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.095990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.096248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.096268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.096502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.096522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.096666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.096683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.096845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.096864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.097075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.097093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.097301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.097319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.097528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.097546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.097632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.097648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.097884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.097901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 
00:26:19.014 [2024-12-13 09:37:31.098060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.098078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.098253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.098272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.098487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.098504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.098591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.098608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.098771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.098788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.099020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.099256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.099423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.099637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.099774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 
00:26:19.014 [2024-12-13 09:37:31.099890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.099905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.100085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.100103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.100305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.100324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.100554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.100702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.100721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.100828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.014 [2024-12-13 09:37:31.100844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.014 qpair failed and we were unable to recover it. 00:26:19.014 [2024-12-13 09:37:31.101000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.101172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.101270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.101381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 
00:26:19.015 [2024-12-13 09:37:31.101560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.101754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.101882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.101897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.102116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.102133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.102292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.102310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.102495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.102515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.102671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.102689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.102851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.102868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.103034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.103203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 
00:26:19.015 [2024-12-13 09:37:31.103372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.103550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.103718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.103970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.103987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.104158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.104177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.104409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.104427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.104677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.104697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.104934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.104953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.105138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.105155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.105363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.105381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 
00:26:19.015 [2024-12-13 09:37:31.105618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.105637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.105902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.105919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.106124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.015 [2024-12-13 09:37:31.106142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.015 qpair failed and we were unable to recover it. 00:26:19.015 [2024-12-13 09:37:31.106383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.106400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.106659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.106678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.106913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.106932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 
00:26:19.016 [2024-12-13 09:37:31.107630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.107967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.107985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.108221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.108239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.108399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.108417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.108604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.108622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.108808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.108825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.109020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.109135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.109364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 
00:26:19.016 [2024-12-13 09:37:31.109471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.109653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.109851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.109868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.110031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.110048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.110280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.110297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.110405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.110421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.110658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.110676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.110832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.110849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.111104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.111120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 00:26:19.016 [2024-12-13 09:37:31.111376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.016 [2024-12-13 09:37:31.111393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.016 qpair failed and we were unable to recover it. 
00:26:19.016 [2024-12-13 09:37:31.111621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.111638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.111801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.111819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.111975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.111992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.112202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.112220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.112310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.112325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.112412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.112430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.112636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.112677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.112843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.112874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.113050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.113206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 
00:26:19.017 [2024-12-13 09:37:31.113468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.113675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.113833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.113921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.113932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.114959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.114970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 
00:26:19.017 [2024-12-13 09:37:31.115200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.115376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.115588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.115676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.115767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.115944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.115957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.116031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.116139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.116224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.116400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 
00:26:19.017 [2024-12-13 09:37:31.116565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.017 [2024-12-13 09:37:31.116730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.017 [2024-12-13 09:37:31.116743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.017 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.116903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.116916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.117920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.117932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.118096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 
00:26:19.018 [2024-12-13 09:37:31.118203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.118287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.118546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.118706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.118878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.118890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 
00:26:19.018 [2024-12-13 09:37:31.119894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.119906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.119994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.120164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.120376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.120667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.120743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.120921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.120933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.121002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.121119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.121258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 
00:26:19.018 [2024-12-13 09:37:31.121429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.121583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.121818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.121831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.122077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.122090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.018 [2024-12-13 09:37:31.122223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.018 [2024-12-13 09:37:31.122236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.018 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.122388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.122401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.122585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.122599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.122822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.122834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.123024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.123202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 
00:26:19.019 [2024-12-13 09:37:31.123436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.123569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.123742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.123899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.123912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.124937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.124951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 
00:26:19.019 [2024-12-13 09:37:31.125118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.125131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.125210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.125222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.125375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.125388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.125587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.125600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.125772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.125785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 
00:26:19.019 [2024-12-13 09:37:31.126919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.126930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.126997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.127098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.127273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.127505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.127714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.127878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.127891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.128088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.128232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.128245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.128445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.128467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 
00:26:19.019 [2024-12-13 09:37:31.128633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.128647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.128787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.128800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.128998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.129011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.129257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.019 [2024-12-13 09:37:31.129269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.019 qpair failed and we were unable to recover it. 00:26:19.019 [2024-12-13 09:37:31.129492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.129505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.129669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.129682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.129854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.129867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.130069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.130216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.130391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 
00:26:19.020 [2024-12-13 09:37:31.130617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.130792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.130888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.130900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.131070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.131083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.131230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.131243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.131407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.131420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.131570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.131585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.131785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.132024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.132037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.132219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.132231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 
00:26:19.020 [2024-12-13 09:37:31.132428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.132441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.132694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.132707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.132783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.132795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.133943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.133955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 
00:26:19.020 [2024-12-13 09:37:31.134177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.134334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.134519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.134670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.134820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.134984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.134997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.135127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.135141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.135291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.135306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.135386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.135398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.135541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.135554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 
00:26:19.020 [2024-12-13 09:37:31.135789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.135802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.020 [2024-12-13 09:37:31.136031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.020 [2024-12-13 09:37:31.136044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.020 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.136869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.136882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.137034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.137204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 
00:26:19.021 [2024-12-13 09:37:31.137378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.137473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.137620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.137858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.137872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.138892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.138905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 
00:26:19.021 [2024-12-13 09:37:31.139060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.139160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.139263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.139474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.139647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.139833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.139846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.140043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.140260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.140342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.140634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 
00:26:19.021 [2024-12-13 09:37:31.140796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.140961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.140975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.021 [2024-12-13 09:37:31.141906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.021 [2024-12-13 09:37:31.141922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.021 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.142136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.142303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 
00:26:19.022 [2024-12-13 09:37:31.142529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.142622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.142783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.142872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.142884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.143102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.143115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.143281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.143293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.143503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.143516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.143693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.143706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.143836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.143849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.144055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 
00:26:19.022 [2024-12-13 09:37:31.144197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.144358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.144587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.144688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.144933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.144947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.145017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.145182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.145405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.145562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.145724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 
00:26:19.022 [2024-12-13 09:37:31.145882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.145895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.146912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.146925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.147064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.147076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.147162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.147174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.147319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.147351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 
00:26:19.022 [2024-12-13 09:37:31.147538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.147573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.147767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.147813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.022 qpair failed and we were unable to recover it. 00:26:19.022 [2024-12-13 09:37:31.148969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.022 [2024-12-13 09:37:31.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.149178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.149191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.149259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.149270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 
00:26:19.023 [2024-12-13 09:37:31.149472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.149486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.149621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.149634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.149718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.149729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.149998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.150100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.150381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.150551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.150694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.150849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.150862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.151025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 
00:26:19.023 [2024-12-13 09:37:31.151176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.151331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.151563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.151776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.151876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.151890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 
00:26:19.023 [2024-12-13 09:37:31.152674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.152830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.152842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.153878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.153891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.154065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.154078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.154268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.154301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 
00:26:19.023 [2024-12-13 09:37:31.154437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.154483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.154700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.154733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.154863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.154894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.155026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.155059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.155192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.155226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.155507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.023 [2024-12-13 09:37:31.155541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.023 qpair failed and we were unable to recover it. 00:26:19.023 [2024-12-13 09:37:31.155745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.155778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.155909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.155955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.156200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.156212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.156408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.156420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 
00:26:19.024 [2024-12-13 09:37:31.156615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.156628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.156838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.156983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.156995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.157861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.157874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 
00:26:19.024 [2024-12-13 09:37:31.158213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.158851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.158998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.159010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.159173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.159207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.159495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.159530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.159797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.159832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.160080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 
00:26:19.024 [2024-12-13 09:37:31.160261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.160506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.160649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.160820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.160966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.160978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.161114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.161280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.161447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.161701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.161884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 
00:26:19.024 [2024-12-13 09:37:31.161982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.161992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.162897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.024 [2024-12-13 09:37:31.162910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.024 qpair failed and we were unable to recover it. 00:26:19.024 [2024-12-13 09:37:31.163005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 
00:26:19.025 [2024-12-13 09:37:31.163233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.163907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.163918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.164142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.164154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.164375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.164387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.164535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.164549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.164758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.164792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 
00:26:19.025 [2024-12-13 09:37:31.164919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.164951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.165182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.165215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.165407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.165440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.165651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.165684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.165919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.165930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.166091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.166194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.166282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.166494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.166765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 
00:26:19.025 [2024-12-13 09:37:31.166941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.166953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.167958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.167968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 
00:26:19.025 [2024-12-13 09:37:31.168478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.025 [2024-12-13 09:37:31.168881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.025 qpair failed and we were unable to recover it. 00:26:19.025 [2024-12-13 09:37:31.168980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.168990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 
00:26:19.026 [2024-12-13 09:37:31.169738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.169837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.169999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.170939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.170949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 
00:26:19.026 [2024-12-13 09:37:31.171046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.171990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.172001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.172216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.172229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.172473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.172486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 00:26:19.026 [2024-12-13 09:37:31.172587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.026 [2024-12-13 09:37:31.172601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.026 qpair failed and we were unable to recover it. 
00:26:19.027 [2024-12-13 09:37:31.180426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.180438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.180508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.180518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.180688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.180729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.180878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.180898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.027 [2024-12-13 09:37:31.181860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.027 [2024-12-13 09:37:31.181875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.027 qpair failed and we were unable to recover it.
00:26:19.031 [2024-12-13 09:37:31.205142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.205156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.205371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.205383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.205565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.205578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.205769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.205782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.206720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 
00:26:19.031 [2024-12-13 09:37:31.206880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.206893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.207899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.207913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.208087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.208329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.208492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 
00:26:19.031 [2024-12-13 09:37:31.208587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.208758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.208934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.208947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.209871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.209884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.210013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 
00:26:19.031 [2024-12-13 09:37:31.210256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.210452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.210604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.210776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.210925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.210938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 
00:26:19.031 [2024-12-13 09:37:31.211758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.211843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.211855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.212078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.212091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.212334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.212349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.031 [2024-12-13 09:37:31.212498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.031 [2024-12-13 09:37:31.212512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.031 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.212694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.212708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.212781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.212793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.213017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.213180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.213342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 
00:26:19.032 [2024-12-13 09:37:31.213493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.213665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.213824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.213836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.214814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.214826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 
00:26:19.032 [2024-12-13 09:37:31.215133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.215960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.215973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 
00:26:19.032 [2024-12-13 09:37:31.216502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.216912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.216924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 
00:26:19.032 [2024-12-13 09:37:31.217801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.217946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.217958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.032 [2024-12-13 09:37:31.218601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.032 qpair failed and we were unable to recover it. 00:26:19.032 [2024-12-13 09:37:31.218681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.218693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.218836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.218849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 
00:26:19.033 [2024-12-13 09:37:31.218986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.219949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.219962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.220125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.220225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 
00:26:19.033 [2024-12-13 09:37:31.220382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.220595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.220748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.220903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.220916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.221776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 
00:26:19.033 [2024-12-13 09:37:31.221938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.221950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.222895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.222906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 
00:26:19.033 [2024-12-13 09:37:31.223236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.223924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.223936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 
00:26:19.033 [2024-12-13 09:37:31.224236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.033 [2024-12-13 09:37:31.224643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.033 qpair failed and we were unable to recover it. 00:26:19.033 [2024-12-13 09:37:31.224721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.224734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 00:26:19.034 [2024-12-13 09:37:31.224863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.224876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 00:26:19.034 [2024-12-13 09:37:31.225012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.225024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 00:26:19.034 [2024-12-13 09:37:31.225177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.225190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 00:26:19.034 [2024-12-13 09:37:31.225383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.225422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 00:26:19.034 [2024-12-13 09:37:31.225627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.034 [2024-12-13 09:37:31.225651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.034 qpair failed and we were unable to recover it. 
00:26:19.038 [2024-12-13 09:37:31.264369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.264401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.264536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.264570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.264759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.264791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.264930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.264962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.265199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.265231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.265514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.265548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.265821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.265853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.266072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.038 [2024-12-13 09:37:31.266105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.038 qpair failed and we were unable to recover it. 00:26:19.038 [2024-12-13 09:37:31.266324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.266357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.266491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.266526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 
00:26:19.039 [2024-12-13 09:37:31.266770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.266803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.267091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.267124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.267394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.267427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.267707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.267740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.267914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.267947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.268206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.268218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.268378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.268413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.268716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.268751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.269004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.269036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.269224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.269256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 
00:26:19.039 [2024-12-13 09:37:31.269381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.269414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.269686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.269721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.269868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.269901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.270002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.270013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.270232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.270266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.270560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.270595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.270858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.270891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.271178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.271210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.271409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.271442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.271650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.271683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 
00:26:19.039 [2024-12-13 09:37:31.271945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.271977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.272280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.272313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.272569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.272602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.272777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.272810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.272938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.272950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.273087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.273118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.273358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.273391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.273618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.273651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.273899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.273937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.274133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.274147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 
00:26:19.039 [2024-12-13 09:37:31.274218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.274229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.274360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.274371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.274595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.274630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.274817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.274986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.275018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.275203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.275216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.275376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.275409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.275694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.275729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.276021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.276053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.276304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.276337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 
00:26:19.039 [2024-12-13 09:37:31.276584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.276619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.276863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.276897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.277041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.277078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.039 qpair failed and we were unable to recover it. 00:26:19.039 [2024-12-13 09:37:31.277230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.039 [2024-12-13 09:37:31.277243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.277431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.277685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.277718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.277849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.277882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.278131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.278144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.278364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.278376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.278574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.278609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 
00:26:19.040 [2024-12-13 09:37:31.278793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.278825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.279119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.279152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.279273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.279307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.279510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.279544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.279725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.279757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.280031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.280229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.280405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.280583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.280694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 
00:26:19.040 [2024-12-13 09:37:31.280947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.280980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.281159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.281193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.281380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.281413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.281617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.281651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.281868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.281901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.282086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.282120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.282383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.282416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.282633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.282670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.282950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.282988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.283195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.283228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 
00:26:19.040 [2024-12-13 09:37:31.283401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.283434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.283710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.283744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.284812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.284844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.285108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.285120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.285283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.285294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 
00:26:19.040 [2024-12-13 09:37:31.285533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.285546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.285789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.285801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.285952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.285985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.286164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.286197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.286389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.286422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.286644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.040 [2024-12-13 09:37:31.286677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.040 qpair failed and we were unable to recover it. 00:26:19.040 [2024-12-13 09:37:31.286944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.286977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.287167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.287178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.287321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.287357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.287533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.287568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 
00:26:19.041 [2024-12-13 09:37:31.287761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.287794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.288043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.288076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.288339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.288373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.288618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.288828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.288861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.289081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.289121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.289293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.289331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.289552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.289589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.289719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.289752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.289959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.289993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 
00:26:19.041 [2024-12-13 09:37:31.290259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.290292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.290475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.290510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.290639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.290672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.290844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.290879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.291087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.291120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.291325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.291342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.291571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.291607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.291826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.291859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.292120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.292163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.292433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.292456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 
00:26:19.041 [2024-12-13 09:37:31.292667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.292684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.292894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.292927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.293164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.293197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.293479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.293515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.293711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.293744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.293985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.294020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.294149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.294167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.294331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.294348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.294563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.294599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.294887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.294920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 
00:26:19.041 [2024-12-13 09:37:31.295186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.295375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.295500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.295698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.295800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.295942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.295954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.296102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.296114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.296271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.296304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.296440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.296483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.296608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.296641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 
00:26:19.041 [2024-12-13 09:37:31.296787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.296819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.297080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.297112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.297247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.041 [2024-12-13 09:37:31.297280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.041 qpair failed and we were unable to recover it. 00:26:19.041 [2024-12-13 09:37:31.297470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.297505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.297615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.297648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.297849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.297887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.298071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.298105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.298226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.298244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.298380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.298398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.298674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.298709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 
00:26:19.042 [2024-12-13 09:37:31.298953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.298987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.299177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.299210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.299339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.299372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.299556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.299592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.299791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.299825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.300038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.300309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.300426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.300631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.300790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 
00:26:19.042 [2024-12-13 09:37:31.300947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.300982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.301161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.301194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.301291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.301307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.301499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.301535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.301733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.301766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.301896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.301929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.302066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.302108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.302192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.302208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.302367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.302409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.302628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.302663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 
00:26:19.042 [2024-12-13 09:37:31.302792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.302825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.303017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.303050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.303337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.303527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.303563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.303766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.303799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.303978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.304012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.304199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.304216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.304377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.304410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.304612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.304647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.304791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.304824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 
00:26:19.042 [2024-12-13 09:37:31.305105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.305123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.305351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.305369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.305473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.305490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.305728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.305761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.305955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.305989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.306192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.306230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.306392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.306404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.306487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.306498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.306695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.306729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.306908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.306941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 
00:26:19.042 [2024-12-13 09:37:31.307120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.042 [2024-12-13 09:37:31.307152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.042 qpair failed and we were unable to recover it. 00:26:19.042 [2024-12-13 09:37:31.307348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.307381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.307518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.307552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.307686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.307719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.307962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.307994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.308186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.308218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.308401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.308435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.308652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.308689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.308877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.308916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.309041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 
00:26:19.043 [2024-12-13 09:37:31.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.309375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.309548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.309637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.309880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.309913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.310119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.310152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.310369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.310407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.310676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.310712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.310911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.310944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.311149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.311161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 
00:26:19.043 [2024-12-13 09:37:31.311361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.311395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.311529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.311564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.311769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.311801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.312048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.312081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.312212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.312244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.312516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.312551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.312795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.312828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.313006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.313154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.313303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 
00:26:19.043 [2024-12-13 09:37:31.313510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.313784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.313977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.313989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.314888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.314904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.315053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 
00:26:19.043 [2024-12-13 09:37:31.315238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.315415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.315512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.315697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.315873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.315906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.316099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.316134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.316352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.316385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.316638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.316674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.316884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.316924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 00:26:19.043 [2024-12-13 09:37:31.317133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.043 [2024-12-13 09:37:31.317165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.043 qpair failed and we were unable to recover it. 
00:26:19.043 [2024-12-13 09:37:31.317446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.317469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.317645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.317663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.317877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.317895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.317988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.318810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-12-13 09:37:31.318966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.318999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.319125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.319159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.319341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.319373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.319493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.319528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.319654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.319686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.319972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.320004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.320215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.320227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.320375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.320407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.320532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.320565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.320754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.320788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-12-13 09:37:31.320968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.321272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.321433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.321666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.321825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.321962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.321981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.322137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.322155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.322313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.322330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.322488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.322523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.322709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.322741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-12-13 09:37:31.322878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.322912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.323151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.323168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.323257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.323273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.323471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb200f0 is same with the state(6) to be set 00:26:19.044 [2024-12-13 09:37:31.323651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-12-13 09:37:31.323690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-12-13 09:37:31.323790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.323808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.323969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.323983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-12-13 09:37:31.324420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.324862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.324873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-12-13 09:37:31.325702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.325949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.325960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.326767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-12-13 09:37:31.326950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.326962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-12-13 09:37:31.327934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-12-13 09:37:31.327946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.352 [2024-12-13 09:37:31.328077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.352 [2024-12-13 09:37:31.328088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.352 qpair failed and we were unable to recover it. 
00:26:19.352 [2024-12-13 09:37:31.328279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.352 [2024-12-13 09:37:31.328291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.352 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.328366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.328376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.328515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.328527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.328602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.328612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.328741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.328751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.328957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.328969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 
00:26:19.353 [2024-12-13 09:37:31.329577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.329855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.329994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.330794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 
00:26:19.353 [2024-12-13 09:37:31.330890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.330902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.331975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.331985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 
00:26:19.353 [2024-12-13 09:37:31.332117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.332873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.332883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.333021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.353 [2024-12-13 09:37:31.333033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.353 qpair failed and we were unable to recover it. 00:26:19.353 [2024-12-13 09:37:31.333128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 
00:26:19.354 [2024-12-13 09:37:31.333286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.333953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.333965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 
00:26:19.354 [2024-12-13 09:37:31.334355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.334988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.334999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 
00:26:19.354 [2024-12-13 09:37:31.335532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.335976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.335989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 
00:26:19.354 [2024-12-13 09:37:31.336766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.336912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.336925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.354 [2024-12-13 09:37:31.337756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.354 [2024-12-13 09:37:31.337769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.354 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.337969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.337981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.338069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.338082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 
00:26:19.355 [2024-12-13 09:37:31.338330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.338343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.338472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.338486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.338685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.338710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.338906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.338918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.339125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.339137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.339331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.339345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.339543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.339556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.339718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.339731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.339865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.339878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.340107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.340146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 
00:26:19.355 [2024-12-13 09:37:31.340335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.340354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.340599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.340619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.340833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.340853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.341052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.341069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.341233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.341497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.341512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.341698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.341710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.341943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.341956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.342117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.342294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 
00:26:19.355 [2024-12-13 09:37:31.342461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.342614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.342852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.342932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.342943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.343161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.343174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.343329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.343341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.343562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.343576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.343673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.343685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.343901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.343912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 
00:26:19.355 [2024-12-13 09:37:31.344290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.344916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.344928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.345024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.355 [2024-12-13 09:37:31.345037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.355 qpair failed and we were unable to recover it. 00:26:19.355 [2024-12-13 09:37:31.345193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.345204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.345410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.345423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.345567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.345580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 
00:26:19.356 [2024-12-13 09:37:31.345787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.345799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.345944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.345956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.346910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.346923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 
00:26:19.356 [2024-12-13 09:37:31.347149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.347789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.347800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 
00:26:19.356 [2024-12-13 09:37:31.348798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.348987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.348999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.349931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.349942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 
00:26:19.356 [2024-12-13 09:37:31.350328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.350986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.356 [2024-12-13 09:37:31.350998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.356 qpair failed and we were unable to recover it. 00:26:19.356 [2024-12-13 09:37:31.351174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.357 [2024-12-13 09:37:31.351186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.357 qpair failed and we were unable to recover it. 00:26:19.357 [2024-12-13 09:37:31.351362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.357 [2024-12-13 09:37:31.351374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.357 qpair failed and we were unable to recover it. 00:26:19.357 [2024-12-13 09:37:31.351526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.357 [2024-12-13 09:37:31.351538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.357 qpair failed and we were unable to recover it. 00:26:19.357 [2024-12-13 09:37:31.351759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.357 [2024-12-13 09:37:31.351771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.357 qpair failed and we were unable to recover it. 
00:26:19.357 [2024-12-13 09:37:31.351878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.357 [2024-12-13 09:37:31.351890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.357 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously for every reconnect attempt between 09:37:31.351 and 09:37:31.397 ...]
00:26:19.362 [2024-12-13 09:37:31.397328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.362 [2024-12-13 09:37:31.397341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.362 qpair failed and we were unable to recover it.
00:26:19.362 [2024-12-13 09:37:31.397494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.362 [2024-12-13 09:37:31.397506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.362 qpair failed and we were unable to recover it. 00:26:19.362 [2024-12-13 09:37:31.397655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.362 [2024-12-13 09:37:31.397668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.362 qpair failed and we were unable to recover it. 00:26:19.362 [2024-12-13 09:37:31.397879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.362 [2024-12-13 09:37:31.397891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.362 qpair failed and we were unable to recover it. 00:26:19.362 [2024-12-13 09:37:31.398059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.362 [2024-12-13 09:37:31.398091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.362 qpair failed and we were unable to recover it. 00:26:19.362 [2024-12-13 09:37:31.398336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.362 [2024-12-13 09:37:31.398369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.398569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.398603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.398784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.398817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.399030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.399062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.399177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.399210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.399414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.399447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 
00:26:19.363 [2024-12-13 09:37:31.399721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.399754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.400031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.400063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.400190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.400224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.400528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.400542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.400631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.400641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.400868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.400902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.401159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.401193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.401395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.401408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.401553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.401588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.401850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.401885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 
00:26:19.363 [2024-12-13 09:37:31.402097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.402131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.402365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.402399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.402652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.402687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.402989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.403023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.403197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.403209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.403353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.403386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.403635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.403670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.403940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.403973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.404173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.404186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.404428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.404440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 
00:26:19.363 [2024-12-13 09:37:31.404664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.404676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.404845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.404857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.405059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.405090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.405305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.405338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.405609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.405645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.405830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.405863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.405986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.406037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.406301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.406335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.406606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.406634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.406762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.406776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 
00:26:19.363 [2024-12-13 09:37:31.406865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.406876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.407007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.407019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.407151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.407163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.363 qpair failed and we were unable to recover it. 00:26:19.363 [2024-12-13 09:37:31.407241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.363 [2024-12-13 09:37:31.407252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.407475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.407509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.407751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.407784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.407962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.407994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.408236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.408269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.408493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.408505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.408671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.408703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 
00:26:19.364 [2024-12-13 09:37:31.408902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.408935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.409241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.409275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.409528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.409563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.409865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.409878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.410006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.410018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.410162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.410174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.410413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.410445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.410690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.410724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.410928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.410961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.411148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.411181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 
00:26:19.364 [2024-12-13 09:37:31.411389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.411402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.411610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.411645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.411908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.411941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.412153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.412186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.412464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.412498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.412758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.412793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.412923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.412956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.413150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.413183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.413385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.413419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.413656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.413668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 
00:26:19.364 [2024-12-13 09:37:31.413896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.413908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.414916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.414928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.415126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.415139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.415285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.415297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.415389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.415402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 
00:26:19.364 [2024-12-13 09:37:31.415652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.415664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.415893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.415906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.364 [2024-12-13 09:37:31.416037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.364 [2024-12-13 09:37:31.416049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.364 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.416911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.416925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.417071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 
00:26:19.365 [2024-12-13 09:37:31.417166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.417324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.417502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.417694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.417845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.417856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.418080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.418092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.418323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.418335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.418579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.418592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.418738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.418750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.418994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 
00:26:19.365 [2024-12-13 09:37:31.419155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.419376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.419583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.419682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.419913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.419925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.420083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.420095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.420244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.420256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.420492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.420506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.420711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.420723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.420926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.420938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 
00:26:19.365 [2024-12-13 09:37:31.421135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.421301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.421462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.421616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.421852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.421943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.421954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.422221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.422234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.422318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.422330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.422496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.422510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.422712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.422739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 
00:26:19.365 [2024-12-13 09:37:31.422937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.422949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.365 [2024-12-13 09:37:31.423081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.365 [2024-12-13 09:37:31.423094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.365 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.423866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.423879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.424009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.424248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 
00:26:19.366 [2024-12-13 09:37:31.424391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.424639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.424788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.424956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.424969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.425893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.425904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 
00:26:19.366 [2024-12-13 09:37:31.426099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.426112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.426329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.426342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.426501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.426513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.426668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.426681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.426903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.426917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.427044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.427140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.427289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.427532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.427765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 
00:26:19.366 [2024-12-13 09:37:31.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.427940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.366 qpair failed and we were unable to recover it. 00:26:19.366 [2024-12-13 09:37:31.428923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.366 [2024-12-13 09:37:31.428935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 
00:26:19.367 [2024-12-13 09:37:31.429477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.429854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.429997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.430156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.430440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.430531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.430675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.430849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.430862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.431066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.431078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 
00:26:19.367 [2024-12-13 09:37:31.431135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.431146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.431373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.431406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.431650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.431684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.431880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.431913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.432101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.432134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.432410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.432444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.432593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.432627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.432922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.432956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.433137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.433170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.433399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.433434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 
00:26:19.367 [2024-12-13 09:37:31.433620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.433632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.433778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.433790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.433938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.433951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.434129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.434162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.434435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.434482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.434694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.434728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.434931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.434966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.435231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.435264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.435501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.435535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.435757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.435791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 
00:26:19.367 [2024-12-13 09:37:31.435971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.436005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.436184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.436217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.436469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.436482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.436703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.436715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.436919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.436953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.367 [2024-12-13 09:37:31.437138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.367 [2024-12-13 09:37:31.437150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.367 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.437281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.437295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.437492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.437505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.437591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.437602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.437789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.437827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 
00:26:19.368 [2024-12-13 09:37:31.438095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.438129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.438403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.438437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.438668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.438702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.439000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.439034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.439317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.439349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.439547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.439583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.439824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.439836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.440048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.440061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.440162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.440193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.440381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.440415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 
00:26:19.368 [2024-12-13 09:37:31.440691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.440728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.440907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.440939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.441953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.441963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.442045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.442057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.442192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.442227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 
00:26:19.368 [2024-12-13 09:37:31.442415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.442458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.442648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.442681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.442862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.442896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.443076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.443109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.443235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.443268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.443542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.443578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.443765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.443778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.443924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.443936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 
00:26:19.368 [2024-12-13 09:37:31.444297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.368 [2024-12-13 09:37:31.444658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.368 qpair failed and we were unable to recover it. 00:26:19.368 [2024-12-13 09:37:31.444800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.444812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.444905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.444916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 
00:26:19.369 [2024-12-13 09:37:31.445612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.445938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.445974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.446282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.446316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.446503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.446537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.446714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.446748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.446947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.446981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.447153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.447320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.447501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 
00:26:19.369 [2024-12-13 09:37:31.447656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.447802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.447942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.447976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.448160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.448194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.448302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.448335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.448533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.448568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.448713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.448746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.448925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.448958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.449082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.449115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.449237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.449272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 
00:26:19.369 [2024-12-13 09:37:31.449462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.449496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.449669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.449683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.449887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.449920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.450211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.450244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.450516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.450551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.450726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.450759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.450934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.451009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.451155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.451193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.451400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.451435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.451650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.451669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 
00:26:19.369 [2024-12-13 09:37:31.451830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.451869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.452062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.452096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.452238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.369 [2024-12-13 09:37:31.452270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.369 qpair failed and we were unable to recover it. 00:26:19.369 [2024-12-13 09:37:31.452470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.452506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.452697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.452731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.452930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.452964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.453077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.453111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.453251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.453284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.453406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.453440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.453659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.453694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 
00:26:19.370 [2024-12-13 09:37:31.453833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.453868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.454042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.454077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.454315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.454349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.454597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.454616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.454792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.454806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.454952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.454986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.455279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.455495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.455635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.455790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 
00:26:19.370 [2024-12-13 09:37:31.455891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.455979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.455990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.456899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.456934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.457108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.457142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 00:26:19.370 [2024-12-13 09:37:31.457262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.370 [2024-12-13 09:37:31.457294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.370 qpair failed and we were unable to recover it. 
00:26:19.370 [2024-12-13 09:37:31.457467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.370 [2024-12-13 09:37:31.457481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.370 qpair failed and we were unable to recover it.
00:26:19.370 [... the same connect() failed (errno = 111) / sock connection error / qpair failed and we were unable to recover it sequence repeats for tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 from 09:37:31.457634 through 09:37:31.467423 ...]
00:26:19.372 [2024-12-13 09:37:31.467626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.372 [2024-12-13 09:37:31.467666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.372 qpair failed and we were unable to recover it.
00:26:19.372 [... the same error sequence repeats for tqpair=0xb121a0 with addr=10.0.0.2, port=4420 from 09:37:31.467936 through 09:37:31.469620 ...]
00:26:19.372 [2024-12-13 09:37:31.469737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.372 [2024-12-13 09:37:31.469778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:19.372 qpair failed and we were unable to recover it.
00:26:19.372 [... the same error sequence repeats once more for tqpair=0x7fd568000b90 at 09:37:31.470025-09:37:31.470044 ...]
00:26:19.372 [2024-12-13 09:37:31.470199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.372 [2024-12-13 09:37:31.470213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.372 qpair failed and we were unable to recover it.
00:26:19.376 [... the same connect() failed (errno = 111) / sock connection error / qpair failed and we were unable to recover it sequence repeats for tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 from 09:37:31.470365 through 09:37:31.504655 ...]
00:26:19.376 [2024-12-13 09:37:31.504822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.504834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.504992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.505210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.505428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.505623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.505768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.505948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.505960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.506183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.506217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.506472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.506506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.506695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.506707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 
00:26:19.376 [2024-12-13 09:37:31.506865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.506909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.507107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.507140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.507346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.507379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.507590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.507603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.507831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.507843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.508006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.508039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.508219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.508253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.508499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.508533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.508726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.508761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.508956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.508990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 
00:26:19.376 [2024-12-13 09:37:31.509193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.509226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.509476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.509489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.509571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.509582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.509809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.509842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.510133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.510167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.510436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.510480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.510633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.510666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.376 qpair failed and we were unable to recover it. 00:26:19.376 [2024-12-13 09:37:31.510864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.376 [2024-12-13 09:37:31.510898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.511099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.511133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.511310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.511342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 
00:26:19.377 [2024-12-13 09:37:31.511596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.511631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.511923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.511957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.512099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.512132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.512376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.512409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.512703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.512716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.512895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.512907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.512999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.513029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.513246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.513279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.513404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.513438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.513712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.513725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 
00:26:19.377 [2024-12-13 09:37:31.513927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.513939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.514170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.514182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.514379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.514391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.514613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.514626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.514837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.514870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.514981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.515013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.515200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.515234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.515495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.515531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.515727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.515760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.515941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 
00:26:19.377 [2024-12-13 09:37:31.516097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.516136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.516410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.516442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.516650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.516687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.516969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.517002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.517257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.517291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.517557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.517591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.517787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.517820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.518018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.518050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.518314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.518348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.518586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.518630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 
00:26:19.377 [2024-12-13 09:37:31.518849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.518862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.519065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.519078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.519323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.519336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.519481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.519494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.519706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.519740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.519921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.377 [2024-12-13 09:37:31.519954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.377 qpair failed and we were unable to recover it. 00:26:19.377 [2024-12-13 09:37:31.520163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.520197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.520388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.520400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.520672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.520684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.520847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.520860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 
00:26:19.378 [2024-12-13 09:37:31.521025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.521059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.521320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.521355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.521536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.521573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.521642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.521653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.521866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.521899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.522096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.522130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.522321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.522354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.522586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.522599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.522769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.522781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.523052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.523084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 
00:26:19.378 [2024-12-13 09:37:31.523279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.523313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.523605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.523641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.523907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.523940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.524174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.524208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.524471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.524506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.524606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.524617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.524762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.524791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.524976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.525010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.525208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.525241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.525442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.525498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 
00:26:19.378 [2024-12-13 09:37:31.525761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.525776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.525951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.525983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.526251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.526285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.526401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.526434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.526590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.526603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.526818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.526830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.527054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.527087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.527298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.527331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.527579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.527614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.527934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.527969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 
00:26:19.378 [2024-12-13 09:37:31.528159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.528192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.528400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.528412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.528562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.528595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.528865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.528898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.529088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.378 [2024-12-13 09:37:31.529121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.378 qpair failed and we were unable to recover it. 00:26:19.378 [2024-12-13 09:37:31.529344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.529378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.529640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.529653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.529812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.529846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.529959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.529990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.530169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.530203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 
00:26:19.379 [2024-12-13 09:37:31.530402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.530434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.530728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.530740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.530937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.530950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.531124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.531137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.531288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.531321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.531568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.531581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.531810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.531843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.532045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.532079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.532325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.532357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.532569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.532604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 
00:26:19.379 [2024-12-13 09:37:31.532866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.532900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.533080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.533114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.533253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.533286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.533549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.533583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.533762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.533797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.533926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.533938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.534169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.534181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.534318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.534330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.534505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.534539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.534759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.534793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 
00:26:19.379 [2024-12-13 09:37:31.534989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.535027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.535290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.535324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.535486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.535522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.535766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.535799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.535921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.535954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.536222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.536256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.536533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.536568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.536837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.536849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.537022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.537034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 00:26:19.379 [2024-12-13 09:37:31.537197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.379 [2024-12-13 09:37:31.537230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.379 qpair failed and we were unable to recover it. 
00:26:19.379 [2024-12-13 09:37:31.537351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.537384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.537496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.537529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.537737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.537780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.538005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.538017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.538165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.538177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.538337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.538370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.538656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.538691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.538909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.538942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.539066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.539100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.539370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.539404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 
00:26:19.380 [2024-12-13 09:37:31.539682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.539694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.539902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.539914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.540137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.540150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.540321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.540333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.540496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.540509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.540760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.540793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.541087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.541120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.541396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.541430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.541715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.541749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.542037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.542070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 
00:26:19.380 [2024-12-13 09:37:31.542342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.542375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.542662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.542697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.542943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.542976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.543179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.543211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.543483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.543517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.543708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.543742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.543879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.543914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.544129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.544162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.544419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.544465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.544661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.544674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 
00:26:19.380 [2024-12-13 09:37:31.544752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.544765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.544913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.544924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.545153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.545186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.545382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.545415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.545589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.545834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.545868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.546158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.546191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.546469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.546515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.380 [2024-12-13 09:37:31.546699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.380 [2024-12-13 09:37:31.546712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.380 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.546946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.546978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 
00:26:19.381 [2024-12-13 09:37:31.547155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.547189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.547378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.547411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.547698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.547732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.548022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.548055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.548201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.548233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.548423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.548468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.548782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.548815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.549012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.549045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.549176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.549209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.549472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.549484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 
00:26:19.381 [2024-12-13 09:37:31.549683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.549695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.549831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.549843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.550002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.550035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.550309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.550342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.550551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.550564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.550724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.550737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.550978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.550990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.551082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.551093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.551343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.551376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.551668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.551703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 
00:26:19.381 [2024-12-13 09:37:31.551877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.551889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.552104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.552136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.552433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.552476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.552762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.552795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.553083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.553116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.553290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.553324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.553521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.553555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.553828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.553861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.554039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.554072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.554319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.554352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 
00:26:19.381 [2024-12-13 09:37:31.554599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.554644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.554890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.554924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.555161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.555174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.555421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.555433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.555599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.555612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.555756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.555768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.555853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.555864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.556007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.381 [2024-12-13 09:37:31.556020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.381 qpair failed and we were unable to recover it. 00:26:19.381 [2024-12-13 09:37:31.556086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.556097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.556321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.556333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 
00:26:19.382 [2024-12-13 09:37:31.556530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.556542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.556682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.556694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.556900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.556934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.557126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.557159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.557469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.557504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.557766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.557777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.557976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.557988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.558147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.558179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.558378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.558411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.558636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 
00:26:19.382 [2024-12-13 09:37:31.558858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.558891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.559158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.559192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.559487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.559522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.559788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.559821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.560113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.560146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.560345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.560378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.560572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.560607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.560898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.560911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.561142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.561155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.561297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.561309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 
00:26:19.382 [2024-12-13 09:37:31.561410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.561421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.561567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.561580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.561828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.561862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.562191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.562225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.562508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.562544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.562689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.562722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.562997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.563009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.563160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.563173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.563395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.563407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.563631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.563644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 
00:26:19.382 [2024-12-13 09:37:31.563811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.382 [2024-12-13 09:37:31.563850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.382 qpair failed and we were unable to recover it. 00:26:19.382 [2024-12-13 09:37:31.564144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.564176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.564300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.564334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.564612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.564647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.564850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.564883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.565151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.565184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.565360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.565394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.565577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.565590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.565718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.565730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.565881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.565894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 
00:26:19.383 [2024-12-13 09:37:31.566035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.566068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.566336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.566370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.566640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.566652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.566832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.566845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.566993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.567005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.567190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.567202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.567265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.567276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.567504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.567538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.567755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.567789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.568013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.568026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 
00:26:19.383 [2024-12-13 09:37:31.568196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.568208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.568353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.568391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.568746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.568780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.569065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.569099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.569376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.569409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.569632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.569644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.569872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.569904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.570109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.570142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.570405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.570438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.570727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.570760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 
00:26:19.383 [2024-12-13 09:37:31.571030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.571042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.571141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.571151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.571372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.571405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.571687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.571722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.571993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.572026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.572313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.572346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.572564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.572600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.572815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.572828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.573028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.573040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 00:26:19.383 [2024-12-13 09:37:31.573186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.383 [2024-12-13 09:37:31.573198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.383 qpair failed and we were unable to recover it. 
00:26:19.383 [2024-12-13 09:37:31.573445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.573463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.573619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.573631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.573833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.573846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.573926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.573937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.574099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.574111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.574253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.574286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.574484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.574519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.574733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.574767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.574951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.574963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 00:26:19.384 [2024-12-13 09:37:31.575115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.384 [2024-12-13 09:37:31.575128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.384 qpair failed and we were unable to recover it. 
00:26:19.384 [2024-12-13 09:37:31.575273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.384 [2024-12-13 09:37:31.575285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.384 qpair failed and we were unable to recover it.
00:26:19.384 [2024-12-13 09:37:31.575534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.384 [2024-12-13 09:37:31.575585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.384 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 09:37:31.575 through 09:37:31.616 ...]
00:26:19.388 [2024-12-13 09:37:31.616387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.388 [2024-12-13 09:37:31.616399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:19.388 qpair failed and we were unable to recover it.
[... further identical failures for tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 continue through 09:37:31.617 ...]
00:26:19.388 [2024-12-13 09:37:31.617971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.388 [2024-12-13 09:37:31.618048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.388 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0xb121a0 with addr=10.0.0.2, port=4420 from 09:37:31.618 through 09:37:31.626 ...]
00:26:19.389 [2024-12-13 09:37:31.626185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.389 [2024-12-13 09:37:31.626205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:19.389 qpair failed and we were unable to recover it.
00:26:19.389 [2024-12-13 09:37:31.626388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.626407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.626651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.626670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.626901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.626921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.627082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.627102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.627208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.627247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.627536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.627571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.627800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.627836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.628110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.628147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.628364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.628399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 00:26:19.389 [2024-12-13 09:37:31.628690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.389 [2024-12-13 09:37:31.628728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.389 qpair failed and we were unable to recover it. 
00:26:19.390 [2024-12-13 09:37:31.628955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.628988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.629243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.629261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.629456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.629477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.629627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.629645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.629838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.629873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.630089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.630124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.630321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.630355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.630583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.630621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.630916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.630935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.631036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.631053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 
00:26:19.390 [2024-12-13 09:37:31.631311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.631330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.631499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.631519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.631773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.631808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.632015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.632050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.632290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.632324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.632553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.632590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.632833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.632851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.633125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.633160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.633295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.633330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.633551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.633587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 
00:26:19.390 [2024-12-13 09:37:31.633898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.633934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.634215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.634250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.634532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.634570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.634749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.634768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.634949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.634983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.635303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.635344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.635555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.635591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.635795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.635830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.636028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.636046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.636236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.636271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 
00:26:19.390 [2024-12-13 09:37:31.636472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.636508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.636716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.636752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.636951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.636985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.637268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.637304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.637494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.637531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.637735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.637769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.637918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.637953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.638116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.638150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.390 qpair failed and we were unable to recover it. 00:26:19.390 [2024-12-13 09:37:31.638328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.390 [2024-12-13 09:37:31.638347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.638526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.638546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 
00:26:19.391 [2024-12-13 09:37:31.638725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.638759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.639020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.639055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.639187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.639222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.639441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.639490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.639703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.639737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.639970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.640005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.640192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.640212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.640390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.640411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.640663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.640683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.640966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.640986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 
00:26:19.391 [2024-12-13 09:37:31.641216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.641251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.641375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.641409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.641640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.641682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.641994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.642028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.642259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.642293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.642514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.642551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.642738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.642772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.642976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.642996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.643234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.643252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.643507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.643526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 
00:26:19.391 [2024-12-13 09:37:31.643762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.643796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.644025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.644058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.644260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.644278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.644545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.644581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.644783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.644816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.645097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.645132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.645418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.645437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.645609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.645629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.645859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.645895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.646100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.646136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 
00:26:19.391 [2024-12-13 09:37:31.646269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.646304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.646515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.646553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.646845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.646879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.391 qpair failed and we were unable to recover it. 00:26:19.391 [2024-12-13 09:37:31.647155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.391 [2024-12-13 09:37:31.647176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.647328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.647347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.647510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.647531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.647762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.647779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.647968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.648001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.648212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.648246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.648501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.648539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 
00:26:19.392 [2024-12-13 09:37:31.648752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.648788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.649006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.649024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.649187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.649207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.649369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.649388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.649599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.649635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.649986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.650020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.650240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.650258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.650376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.650394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.650562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.650582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.650828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.650862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 
00:26:19.392 [2024-12-13 09:37:31.651144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.651180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.651469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.651504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.651696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.651733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.651941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.651982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.652198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.652218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.652395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.652414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.652682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.652720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.652848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.652885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.653169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.653203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.653432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.653487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 
00:26:19.392 [2024-12-13 09:37:31.653682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.653702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.653927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.653961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.654167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.654202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.654336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.654370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.654656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.654693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.654968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.655003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.655202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.655243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.655511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.655546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.655842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.656090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.656126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 
00:26:19.392 [2024-12-13 09:37:31.656337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.656372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.656582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.656617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.656765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.392 [2024-12-13 09:37:31.656799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.392 qpair failed and we were unable to recover it. 00:26:19.392 [2024-12-13 09:37:31.656992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.657035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.657280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.657299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.657474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.657494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.657726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.657759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.658005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.658023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.658196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.658213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.658332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.658366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 
00:26:19.393 [2024-12-13 09:37:31.658602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.658645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.658953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.659188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.659316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.659488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.659713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.659887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.659924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.660147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.660169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.660415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.660434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.660671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.660692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 
00:26:19.393 [2024-12-13 09:37:31.660803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.660822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.661027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.661062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.661190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.661223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.661391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.661426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.661701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.661736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.661969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.661987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.662237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.662256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.662431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.662453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.662552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.662569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.662723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.662742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 
00:26:19.393 [2024-12-13 09:37:31.662992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.663186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.663296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.663556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.663751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.663935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.663953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.664127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.664159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.664362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.664396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.664635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.664673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.664821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.664855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 
00:26:19.393 [2024-12-13 09:37:31.665124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.665158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.665421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.665464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.393 [2024-12-13 09:37:31.665677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.393 [2024-12-13 09:37:31.665711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.393 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.665832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.665866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.666145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.666179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.666484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.666520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.666767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.666801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.666949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.666968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.667155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.667189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.667460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.667498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 
00:26:19.394 [2024-12-13 09:37:31.667765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.667800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.668023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.668063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.668200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.668471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.668509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.668717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.668738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.668910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.668943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.669135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.669170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.669372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.669407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.669685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.669722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.670028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.670061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 
00:26:19.394 [2024-12-13 09:37:31.670316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.670334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.670565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.670586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.670688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.670704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.670950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.670985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.671121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.671153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.671468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.671504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.671709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.671743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.671953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.671988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.672182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.672202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.672355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 
00:26:19.394 [2024-12-13 09:37:31.672671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.672707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.672975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.673009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.673307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.673332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.673563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.673587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.673685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.673704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.673836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.673856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.674100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.674120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.674240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.674262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.674528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.674563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.674772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.674799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 
00:26:19.394 [2024-12-13 09:37:31.674997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.675024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.675196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.394 qpair failed and we were unable to recover it. 00:26:19.394 [2024-12-13 09:37:31.675423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.394 [2024-12-13 09:37:31.675454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.675646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.675672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.675848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.675871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.676063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.676088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.676245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.676269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.676511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.676544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.676792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.676814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.676915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.676942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 
00:26:19.395 [2024-12-13 09:37:31.677243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.677273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.677521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.677544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.677871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.677920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.678147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.678168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.678439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.678469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.678721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.678740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.678932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.678952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.679149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.679167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.679327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.679345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.679519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.679541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 
00:26:19.395 [2024-12-13 09:37:31.679729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.679748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.679933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.679952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.680196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.680215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.680477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.680496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.680669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.680791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.680814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.680918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.680935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.681115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.681134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.681298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.681317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.681501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.681520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 
00:26:19.395 [2024-12-13 09:37:31.681687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.681705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.681876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.681894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.682066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.682085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.682310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.682329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.682428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.682445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.682679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.682699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.682896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.682916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.683089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.683110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.683270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.683289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.683460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.683480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 
00:26:19.395 [2024-12-13 09:37:31.683649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.683670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.395 [2024-12-13 09:37:31.683761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.395 [2024-12-13 09:37:31.683778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.395 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.683931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.683950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.684143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.684162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.684354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.684375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.684477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.684496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.684588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.684604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.684849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.684869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.685116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.685135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.685353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.685373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 
00:26:19.396 [2024-12-13 09:37:31.685524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.685545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.685719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.685739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.685950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.685992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.686100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.686117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.686352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.686366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.686605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.686621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.686791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.686805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.686951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.686965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.687185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.687200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.687356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.687370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 
00:26:19.396 [2024-12-13 09:37:31.687592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.687606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.687829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.687844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.687947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.687958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.688197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.688210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.688349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.688362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.688461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.688478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.688710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.688723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.688884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.688898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.689049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.689062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.689227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.689240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 
00:26:19.396 [2024-12-13 09:37:31.689456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.689471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.689635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.689648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.689886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.689899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.690052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.690067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.690308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.690321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.690562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.690577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.396 [2024-12-13 09:37:31.690730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.396 [2024-12-13 09:37:31.690744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.396 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.690900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.690913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.691077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.691092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.691258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.691272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 
00:26:19.397 [2024-12-13 09:37:31.691501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.691516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.691685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.691719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.691920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.691955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.692236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.692270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.692403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.692437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.692593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.692627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.692875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.692890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.693105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.693118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.693268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.693282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.693441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.693485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 
00:26:19.397 [2024-12-13 09:37:31.693676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.693710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.693938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.693952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.694106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.694140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.694347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.694383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.694508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.694543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.694674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.694709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.695024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.695059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.695269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.695305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.695559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.695595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.695795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.695829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 
00:26:19.397 [2024-12-13 09:37:31.696138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.696173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.696308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.696342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.696473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.696508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.696777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.696813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.697000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.697034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.697328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.697363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.697597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.697632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.697762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.697796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.698061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.698310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 
00:26:19.397 [2024-12-13 09:37:31.698435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.698649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.698751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.698903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.698918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.397 [2024-12-13 09:37:31.699089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.397 [2024-12-13 09:37:31.699102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.397 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.699334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.699349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.699511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.699526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.699696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.699710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.699907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.699922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.700011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 
00:26:19.678 [2024-12-13 09:37:31.700234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.700401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.700511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.700667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.700892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.700906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.701149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.701246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.701400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.701660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.701813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 
00:26:19.678 [2024-12-13 09:37:31.701928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.701939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.702174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.702187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.702318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.702333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.702565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.702579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.702738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.702752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.702906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.702919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.703024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.703037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.703327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.703361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.703622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.703657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.703868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.703881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 
00:26:19.678 [2024-12-13 09:37:31.704030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.704043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.704308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.704342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.704475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.704513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.704715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.704750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.678 qpair failed and we were unable to recover it. 00:26:19.678 [2024-12-13 09:37:31.705009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.678 [2024-12-13 09:37:31.705022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.705182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.705196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.705273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.705285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.705552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.705587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.705819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.705854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.706049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 
00:26:19.679 [2024-12-13 09:37:31.706222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.706334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.706485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.706656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.706811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.706824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.707056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.707071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.707224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.707237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.707383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.707396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.707655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.707669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.707826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.707842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 
00:26:19.679 [2024-12-13 09:37:31.708003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.708017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.708184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.708197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.708482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.708497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.708669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.708684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.708904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.708918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.709078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.709180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.709289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.709481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.709650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 
00:26:19.679 [2024-12-13 09:37:31.709870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.709886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.710123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.710139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.710370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.710386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.710547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.710562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.710790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.710804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.710950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.710964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.711167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.711180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.711353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.711367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.711582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.711598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.711788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.711801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 
00:26:19.679 [2024-12-13 09:37:31.711882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.711894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.712122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.712135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.712364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.712379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.679 [2024-12-13 09:37:31.712621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.679 [2024-12-13 09:37:31.712635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.679 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.712802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.712815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.712979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.712994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.713164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.713177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.713410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.713424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.713672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.713687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.713849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.713862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 
00:26:19.680 [2024-12-13 09:37:31.714011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.714026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.714179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.714193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.714403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.714416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.714675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.714690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.714901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.714916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.715067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.715081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.715240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.715254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.715457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.715472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.715726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.715740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.715905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.715932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 
00:26:19.680 [2024-12-13 09:37:31.716014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.716841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.716852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.717046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.717060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.717242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.717255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.717460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.717474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 
00:26:19.680 [2024-12-13 09:37:31.717665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.717680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.717913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.717927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.718899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.718912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.719014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.719028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.719171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.719184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 
00:26:19.680 [2024-12-13 09:37:31.719342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.719355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.719533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.719548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.680 [2024-12-13 09:37:31.719760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.680 [2024-12-13 09:37:31.719773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.680 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.719981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.719996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.720942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.720955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 
00:26:19.681 [2024-12-13 09:37:31.721165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.721180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.721416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.721430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.721587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.721603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.721699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.721711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.721945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.721959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.722111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.722126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.722405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.722419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.722714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.722729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.722823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.722837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.722938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.722953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 
00:26:19.681 [2024-12-13 09:37:31.723165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.723178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.723342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.723357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.723465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.723477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.723621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.723648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.723909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.723923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.724114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.724128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.724337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.724350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.724514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.724748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.724764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.724997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.725011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 
00:26:19.681 [2024-12-13 09:37:31.725195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.725208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.725382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.725396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.725611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.725628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.725902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.725916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.726127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.726140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.726546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.726562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.726755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.726768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.726922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.726936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.727143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.727157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 
00:26:19.681 [2024-12-13 09:37:31.727365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.727379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.681 [2024-12-13 09:37:31.727586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.681 [2024-12-13 09:37:31.727602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.681 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.727687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.727701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.727799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.727811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.727954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.727968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.728061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.728072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.728256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.728270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.728417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.728430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.728577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.728590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.728801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.728813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 
00:26:19.682 [2024-12-13 09:37:31.729052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.729066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.729220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.729254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.729397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.729410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.729599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.729614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.729772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.729785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.730018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.730109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.730350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.730534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.730769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 
00:26:19.682 [2024-12-13 09:37:31.730960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.730974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.731188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.731201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.731365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.731379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.731611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.731627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.731876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.731889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 
00:26:19.682 [2024-12-13 09:37:31.732807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.732922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.732936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.733166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.733181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.733339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.733352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.733493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.682 [2024-12-13 09:37:31.733507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.682 qpair failed and we were unable to recover it. 00:26:19.682 [2024-12-13 09:37:31.733675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.733709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.733911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.733945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.734160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.734194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.734296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.734309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.734478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.734514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 
00:26:19.683 [2024-12-13 09:37:31.734736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.734770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.734978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.735223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.735305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.735485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.735709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.735947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.735981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.736258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.736293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.736571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.736606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.736806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.736840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 
00:26:19.683 [2024-12-13 09:37:31.736982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.737016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.737198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.737232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.737367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.737402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.737615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.737650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.737861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.737895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.738078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.738112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.738365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.738398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.738686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.738721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.739020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.739052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.739252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.739287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 
00:26:19.683 [2024-12-13 09:37:31.739535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.739572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.739779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.739812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.740076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.740110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.740288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.740321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.740598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.740634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.740830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.740864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.741056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.741069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.741224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.741237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.741399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.741412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.741559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.741593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 
00:26:19.683 [2024-12-13 09:37:31.741884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.741918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.742104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.742137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.742380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.742420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.742711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.683 [2024-12-13 09:37:31.742747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.683 qpair failed and we were unable to recover it. 00:26:19.683 [2024-12-13 09:37:31.742878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.742904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.743126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.743161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.743425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.743490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.743701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.743735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.744012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.744246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 
00:26:19.684 [2024-12-13 09:37:31.744394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.744649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.744799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.744965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.744978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.745189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.745224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.745355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.745388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.745678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.745713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.745896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.745930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.746194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.746207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.746377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.746390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 
00:26:19.684 [2024-12-13 09:37:31.746565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.746600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.746903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.746938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.747126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.747139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.747299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.747333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.747612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.747647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.747843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.747877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.748035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.748047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.748210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.748234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.748321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.748333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.748492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.748506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 
00:26:19.684 [2024-12-13 09:37:31.748744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.748777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.749029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.749062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.749316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.749349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.749529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.749565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.749841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.749876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.750174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.750186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.750339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.750352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.750501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.750515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.750653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.750665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.750830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.750843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 
00:26:19.684 [2024-12-13 09:37:31.751154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.751188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.751492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.751528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.684 [2024-12-13 09:37:31.751821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.684 [2024-12-13 09:37:31.751860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.684 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.752086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.752120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.752399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.752432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.752579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.752617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.752893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.752928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.753111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.753124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.753358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.753392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.753656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.753692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 
00:26:19.685 [2024-12-13 09:37:31.753942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.753977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.754165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.754178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.754349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.754381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.754583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.754618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.754868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.754903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.755134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.755147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.755378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.755392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.755543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.755557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.755645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.755657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.755895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.755928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 
00:26:19.685 [2024-12-13 09:37:31.756144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.756178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.756360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.756407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.756614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.756627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.756800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.756813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.756976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.757011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.757215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.757250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.757479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.757514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.757710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.757743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.758019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.758052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.758266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.758302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 
00:26:19.685 [2024-12-13 09:37:31.758486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.758522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.758645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.758659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.758818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.758831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.759041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.759074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.759265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.759497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.759533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.759729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.759763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.759903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.759937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.760137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.760171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.760390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.760425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 
00:26:19.685 [2024-12-13 09:37:31.760715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.760728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.685 qpair failed and we were unable to recover it. 00:26:19.685 [2024-12-13 09:37:31.760872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.685 [2024-12-13 09:37:31.760885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.761113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.761152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.761432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.761476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.761683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.761717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.761933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.761970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.762170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.762183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.762366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.762399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.762671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.762704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.762965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.763000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 
00:26:19.686 [2024-12-13 09:37:31.763216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.763251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.763549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.763585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.763770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.763803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.763998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.764032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.764234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.764247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.764444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.764487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.764777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.764811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.765030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.765064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.765261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.765294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.765597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.765611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 
00:26:19.686 [2024-12-13 09:37:31.765774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.765787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.766020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.766054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.766323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.766357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.766649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.766684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.766844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.766878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.767087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.767121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.767369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.767382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.767532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.767547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.767637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.767875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.767908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 
00:26:19.686 [2024-12-13 09:37:31.768209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.768243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.768537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.768572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.768772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.768805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.769058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.769092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.769298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.769312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.769526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.769561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.769874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.769909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.770094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.770127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.770337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.770351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.686 [2024-12-13 09:37:31.770516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.770529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 
00:26:19.686 [2024-12-13 09:37:31.770770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.686 [2024-12-13 09:37:31.770802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.686 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.771011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.771046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.771343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.771383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.771667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.771703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.771920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.771954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.772171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.772205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.772411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.772444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.772744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.772779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.773067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.773101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.773411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 
00:26:19.687 [2024-12-13 09:37:31.773704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.773739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.773967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.774000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.774304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.774338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.774615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.774629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.774798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.774832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.775117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.775150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.775303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.775318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.775559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.775595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.775832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.775866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.776078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.776112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 
00:26:19.687 [2024-12-13 09:37:31.776248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.776261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.776530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.776543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.776789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.776803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.776984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.776997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.777159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.777172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.777404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.777438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.777569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.777609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.777844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.777878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.778072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.778106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.778345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.778380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 
00:26:19.687 [2024-12-13 09:37:31.778704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.778719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.778871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.778884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.778959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.778971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.779122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.779133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.779380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.687 [2024-12-13 09:37:31.779433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.687 qpair failed and we were unable to recover it. 00:26:19.687 [2024-12-13 09:37:31.779739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.779781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.780085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.780121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.780358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.780393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.780691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.780727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.780992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-12-13 09:37:31.781170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.781342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.781508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.781691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.781879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.781897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.781986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.782002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.782198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.782232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.782519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.782555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.782834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.782868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.783153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.783188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-12-13 09:37:31.783470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.783505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.783794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.783828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.784051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.784085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.784291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.784326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.784601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.784620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.784870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.784889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.785084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.785106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.785281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.785300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.785472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.785492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.785762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.785797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-12-13 09:37:31.785989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.786024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.786238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.786272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.786466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.786486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.786767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.786786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.786975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.786995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.787216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.787235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.787489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.787509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.787618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.787635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.787865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.787899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.788123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.788158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 
00:26:19.688 [2024-12-13 09:37:31.788445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.788489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.788759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.788793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.789020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.789055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.789340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.789374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.688 [2024-12-13 09:37:31.789679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.688 [2024-12-13 09:37:31.789716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.688 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.789978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.790013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.790209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.790243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.790502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.790522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.790797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.790816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.791027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.791046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-12-13 09:37:31.791291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.791310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.791482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.791502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.791730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.791945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.791987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.792135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.792154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.792324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.792343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.792566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.792586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.792843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.792861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.793084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.793103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.793276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.793295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-12-13 09:37:31.793455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.793475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.793640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.793675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.793934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.793969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.794170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.794204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.794471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.794492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.794653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.794672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.794824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.794858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.795170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.795206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.795421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.795463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.795689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.795725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-12-13 09:37:31.796031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.796064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.796312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.796347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.796615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.796651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.796880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.796914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.797214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.797248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.797402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.797422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.797599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.797636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.797940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.797974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.798236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.798271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.798491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.798528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 
00:26:19.689 [2024-12-13 09:37:31.798831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.798866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.799151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.799185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.799495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.799531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.799810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.799844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.689 [2024-12-13 09:37:31.800155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.689 [2024-12-13 09:37:31.800190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.689 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.800415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.800456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.800576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.800595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.800846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.800865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.801167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.801202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.801346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.801380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-12-13 09:37:31.801679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.801699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.801814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.801832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.802016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.802035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.802199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.802218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.802485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.802670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.802689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.802970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.802990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.803261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.803280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.803433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.803458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.803717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.803735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-12-13 09:37:31.803983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.804002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.804288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.804324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.804619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.804655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.804843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.804878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.805153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.805187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.805484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.805520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.805655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.805689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.805833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.805868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.806078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.806113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.806407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.806426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-12-13 09:37:31.806655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.806675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.806898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.806917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.807083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.807101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.807187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.807204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.807443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.807468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.807569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.807585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.807782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.807816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.808075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.808109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.808323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.808370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.808560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.808580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 
00:26:19.690 [2024-12-13 09:37:31.808750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.808961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.809002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.809282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.809316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.809592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.690 [2024-12-13 09:37:31.809612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.690 qpair failed and we were unable to recover it. 00:26:19.690 [2024-12-13 09:37:31.809717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.809736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.809997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.810033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.810237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.810271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.810478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.810513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.810713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.810747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.810947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.810982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-12-13 09:37:31.811267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.811286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.811459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.811479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.811664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.811685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.811869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.811904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.812139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.812173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.812563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.812644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.812952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.812992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.813278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.813313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.813616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.813654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.813863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.813899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-12-13 09:37:31.814106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.814140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.814338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.814371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.814649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.814685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.814941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.814976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.815173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.815207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.815456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.815476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.815698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.815717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.815887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.815905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.816068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.816112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.816339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.816373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 
00:26:19.691 [2024-12-13 09:37:31.816653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.816688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.816905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.816940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.817199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.817234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.817547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.817583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.817877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.817911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.818144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.818179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.818369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.818387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.818559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.818579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.691 [2024-12-13 09:37:31.818755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.691 [2024-12-13 09:37:31.818789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.691 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.819066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.819100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-12-13 09:37:31.819379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.819413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.819700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.819734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.819926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.819961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.820961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.820979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.821145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.821164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-12-13 09:37:31.821424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.821471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.821771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.821805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.822075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.822110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.822317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.822351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.823052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.823082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.823349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.823368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.823615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.823635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.823858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.823877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.824047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.824066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.824192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-12-13 09:37:31.824377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.824395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.824644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.824663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.824884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.824902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.825141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.825159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.825368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.825387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.825611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.825634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.825885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.825905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.826153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.826172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.826279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.826301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.826414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.826433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 
00:26:19.692 [2024-12-13 09:37:31.826628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.826649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.826897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.826917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.827020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.827256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.827275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.827439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.827466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.827631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.827650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.827849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.827867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.828137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.692 [2024-12-13 09:37:31.828156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.692 qpair failed and we were unable to recover it. 00:26:19.692 [2024-12-13 09:37:31.828381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.828401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.828643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.828663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-12-13 09:37:31.828905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.828924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.829092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.829111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.829278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.829296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.829550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.829570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.829838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.829857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.830100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.830119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.830355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.830373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.830468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.830485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.830597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.830616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.830789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.830808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-12-13 09:37:31.831062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.831081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.831328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.831347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.831543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.831563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.831715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.831734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.831843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.831862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.832141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.832191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.832471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.832523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.832781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.832821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.833013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.833028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.833218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.833231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-12-13 09:37:31.833383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.833397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.833637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.833652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.833907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.833921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.834932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.834970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.835122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.835133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 
00:26:19.693 [2024-12-13 09:37:31.835318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.835335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.835490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.835507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.835726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.835739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.835919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.835934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.836086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.836101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.836282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.836296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.693 [2024-12-13 09:37:31.836443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.693 [2024-12-13 09:37:31.836509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.693 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.836768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.836804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.836955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.836988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.837291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.837338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-12-13 09:37:31.837434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.837454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.837617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.837631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.837827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.837840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.838070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.838083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.838249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.838263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.838378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.838411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.838652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.838688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.838903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.838937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.839149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.839184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.839456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.839470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-12-13 09:37:31.839687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.839701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.839803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.839815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.840078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.840093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.840204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.840217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.840394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.840436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.840718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.840754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.840887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.840923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.841050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.841084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.841269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.841303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.841521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.841535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-12-13 09:37:31.841690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.841705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.841854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.841888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.842102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.842138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.842425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.842439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.842540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.842578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.842728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.842765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.843067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.843103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.843362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.843396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.843638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.843654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.843827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.843863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 
00:26:19.694 [2024-12-13 09:37:31.844127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.844161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.844437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.844455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.844686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.844701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.844861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.844876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.845089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.845102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.694 [2024-12-13 09:37:31.845345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.694 [2024-12-13 09:37:31.845358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.694 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.845463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.845476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.845570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.845583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.845689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.845701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.845871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.845894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-12-13 09:37:31.846012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.846030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.846256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.846275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.846443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.846476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.846634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.846654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.846920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.846955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.847149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.847167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.847349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.847368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.847532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.847551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.847642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.847657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.847882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.847899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-12-13 09:37:31.848138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.848156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.848302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.848320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.848496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.848516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.848678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.848695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.848930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.848950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.849089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.849112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.849288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.849310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.849596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.849619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.849805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.849824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.850066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-12-13 09:37:31.850241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.850441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.850614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.850738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.850926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.850946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.851101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.851118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.851387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.851406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.851589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.851612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.851786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.851980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.852003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 
00:26:19.695 [2024-12-13 09:37:31.852100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.852114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.695 [2024-12-13 09:37:31.852379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.695 [2024-12-13 09:37:31.852396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.695 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.852563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.852580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.852862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.852880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.853053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.853071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.853311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.853327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.853562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.853581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.853702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.853851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.853871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.854033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.854050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-12-13 09:37:31.854167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.854352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.854370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.854619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.854643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.854840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.854861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.855192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.855387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.855638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.855825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.855996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-12-13 09:37:31.856253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.856441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.856573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.856759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.856963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.856981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.857108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.857129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.857391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.857414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.857609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.857634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.857864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.857886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.858082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.696 [2024-12-13 09:37:31.858208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.858454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.858628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.858745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.858980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.858996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.859160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.859175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.859410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.859430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.859663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.859710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.860040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.860086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 00:26:19.696 [2024-12-13 09:37:31.860378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.696 [2024-12-13 09:37:31.860425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.696 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-12-13 09:37:31.860627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.860648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.860826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.860846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.861095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.861114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.861295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.861315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.861494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.861515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.861603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.861620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.861815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.861833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.862003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.862021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.862119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.862136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.862234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.862252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-12-13 09:37:31.862496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.862516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.862762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.862781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.863001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.863040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.863283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.863301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.863472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.863489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.863728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.863744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.863962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.863981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.864177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.864200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.864458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.864476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.864661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.864682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-12-13 09:37:31.864913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.864938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.865180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.865203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.865313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.865503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.865527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.865705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.865724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.865904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.865926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.866105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.866122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.866362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.866379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.866531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.866548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.866810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.866831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 
00:26:19.697 [2024-12-13 09:37:31.866948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.866968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.867191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.867210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.867377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.867395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.867668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.867698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.867955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.867975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.868107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.868127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.868354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.868376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.868502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.868520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.868684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.697 [2024-12-13 09:37:31.868701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.697 qpair failed and we were unable to recover it. 00:26:19.697 [2024-12-13 09:37:31.868819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.868834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-12-13 09:37:31.868934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.868950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.869053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.869068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.869222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.869240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.869337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.869351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.869574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.869595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.869796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.869813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.870009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.870025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.870264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.870285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.870501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.870527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.870705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.870726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-12-13 09:37:31.870924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.870945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.871055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.871074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.871250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.871276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.871542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.871563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.871781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.871799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.872029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.872047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.872218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.872236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.872463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.872482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.872724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.872742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.872913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.872932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-12-13 09:37:31.873195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.873213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.873371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.873392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.873576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.873597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.873762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.873782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.874964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.874980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 
00:26:19.698 [2024-12-13 09:37:31.875145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.875163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.875408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.875426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.875670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.875689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.875864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.875881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.875980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.875998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.876161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.876180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.876266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.876282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.698 [2024-12-13 09:37:31.876495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.698 [2024-12-13 09:37:31.876515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.698 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.876596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.876613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.876796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.876815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-12-13 09:37:31.876967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.876986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.877223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.877241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.877418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.877437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.877537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.877554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.877844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.877864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.878108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.878217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.878472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.878589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.878773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-12-13 09:37:31.878890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.878908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.879090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.879109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.879275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.879302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.879469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.879489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.879668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.879687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.879902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.879919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.880066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.880086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.880341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.880360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.880513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.880532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.880775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.880794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-12-13 09:37:31.880945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.880963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.881974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.881994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.882158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.882177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.882415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.882433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.882659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.882678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 
00:26:19.699 [2024-12-13 09:37:31.882769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.882787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.699 [2024-12-13 09:37:31.883971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.699 [2024-12-13 09:37:31.883989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.699 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.884176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.884196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.884457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.884477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.884694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.884715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-12-13 09:37:31.884863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.884881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.885032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.885051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.885198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.885216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.885385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.885404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.885638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.885656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.885873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.885890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.886063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.886240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.886418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.886521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-12-13 09:37:31.886681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.886894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.886913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.887812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.887830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.888049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.888067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.888239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.888259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-12-13 09:37:31.888474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.888493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.888644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.888663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.888862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.888881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.889102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.889120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.889429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.889453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.889539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.889555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.889710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.889728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.889902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.889920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.890159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.890177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.890282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.890300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 
00:26:19.700 [2024-12-13 09:37:31.890485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.890505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.700 qpair failed and we were unable to recover it. 00:26:19.700 [2024-12-13 09:37:31.890609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.700 [2024-12-13 09:37:31.890628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.890865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.890883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.891111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.891130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.891223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.891241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.891467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.891486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.891644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.891662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.891899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.891918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.892097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.892116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.892273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.892291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-12-13 09:37:31.892368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.892384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.892547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.892567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.892731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.892748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.892985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.893171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.893436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.893616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.893739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.893939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.893957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.894109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.894127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-12-13 09:37:31.894289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.894307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.894540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.894560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.894791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.894813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.894978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.894999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.895146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.895164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.895401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.895419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.895671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.895689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.895795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.895813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.896056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.896074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.896303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.896322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-12-13 09:37:31.896469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.896488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.896589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.896606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.896855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.896874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.896993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.897010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.897250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.897270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.897485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.897505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.897676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.897694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.897883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.897901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.898088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.898105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 00:26:19.701 [2024-12-13 09:37:31.898341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.898359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.701 qpair failed and we were unable to recover it. 
00:26:19.701 [2024-12-13 09:37:31.898513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.701 [2024-12-13 09:37:31.898532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.898769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.898788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.898935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.898953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.899192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.899211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.899459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.899477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.899648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.899666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.899902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.899921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.900101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.900119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.900220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.900237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.900475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.900494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 
00:26:19.702 [2024-12-13 09:37:31.900706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.900723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.900983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.901001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.901149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.901167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.901402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.901422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.901613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.901634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.901731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.901749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.901986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.902004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.902155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.902173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.902434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.902632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.902659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 
00:26:19.702 [2024-12-13 09:37:31.902813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.902832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.903050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.903068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.903252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.903274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.903440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.903466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.903646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.903664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.903811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.903828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.904065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.904083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.904322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.904340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.904553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.904572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.904750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.904768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 
00:26:19.702 [2024-12-13 09:37:31.904921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.904941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.905182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.905201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.905290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.905307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.905490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.905508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.905678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.905697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.905915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.905934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.906045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.906065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.906229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.906249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.906342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.906358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.702 qpair failed and we were unable to recover it. 00:26:19.702 [2024-12-13 09:37:31.906589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.702 [2024-12-13 09:37:31.906608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 
00:26:19.703 [2024-12-13 09:37:31.906710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.906729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.906824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.906841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.907966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.907985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.908196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.908214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.908404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.908424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 
00:26:19.703 [2024-12-13 09:37:31.908528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.908545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.908735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.908753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.908928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.908947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.909093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.909112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.909272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.909290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.909475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.909494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.909680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.909698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.909888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.909906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.910001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.910018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.910233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.910252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 
00:26:19.703 [2024-12-13 09:37:31.910415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.910434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.910651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.910671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.910836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.910857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.911112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.911130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.911386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.911403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.911573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.911592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.911747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.911764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.911950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.911968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.912055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.912168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 
00:26:19.703 [2024-12-13 09:37:31.912347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.912469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.912713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.912843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.912862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.913005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.913022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.913257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.913276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.913426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.913444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.913687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.913707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.703 [2024-12-13 09:37:31.913814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.703 [2024-12-13 09:37:31.913832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.703 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.913917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.913934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 
00:26:19.704 [2024-12-13 09:37:31.914147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.914339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.914512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.914678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.914784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.914912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.914929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.915166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.915185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.915328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.915345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.915566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.915585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.915755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.915773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 
00:26:19.704 [2024-12-13 09:37:31.915876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.915893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.916054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.916072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.916238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.916256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.916423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.916441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.916603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.916621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.916828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.916846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 
00:26:19.704 [2024-12-13 09:37:31.917734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.917953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.917972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.918196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.918214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.918424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.918442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.918592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.918611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.918768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.918785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.918964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.918983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.919135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.919153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.919408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.919426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 
00:26:19.704 [2024-12-13 09:37:31.919595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.919613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.919780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.919798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.919955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.704 [2024-12-13 09:37:31.919973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.704 qpair failed and we were unable to recover it. 00:26:19.704 [2024-12-13 09:37:31.920194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.920214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.920401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.920420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.920531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.920550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.920697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.920715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.920871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.920888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.921051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.921069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.921307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.921325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 
00:26:19.705 [2024-12-13 09:37:31.921565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.921583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.921791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.921809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.922068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.922085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.922268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.922287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.922557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.922576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.922672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.922690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.922854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.922872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.923050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.923070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.923179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.923197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.923467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.923488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 
00:26:19.705 [2024-12-13 09:37:31.923650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.923668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.923765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.923783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.924033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.924051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.924280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.924298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.924539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.924557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.924664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.924682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.924933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.924950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.925162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.925181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.925336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.925353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.925612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 
00:26:19.705 [2024-12-13 09:37:31.925786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.925804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.925913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.925929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.926092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.926115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.926280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.926297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.926459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.926478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.926712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.926731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.926918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.926936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.927120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.927138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.927328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.927346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.927438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.927463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 
00:26:19.705 [2024-12-13 09:37:31.927689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.927707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.927851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.705 [2024-12-13 09:37:31.927869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.705 qpair failed and we were unable to recover it. 00:26:19.705 [2024-12-13 09:37:31.928013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.928032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.928174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.928192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.928355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.928373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.928518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.928535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.928786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.928804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.928995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.929013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.929246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.929263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.929417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.929437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 
00:26:19.706 [2024-12-13 09:37:31.929624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.929642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.929761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.929778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.930014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.930031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.930189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.930207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.930378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.930396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.930628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.930646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.930866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.930885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.931059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.931250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.931363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 
00:26:19.706 [2024-12-13 09:37:31.931543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.931772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.931958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.931976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.932193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.932211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.932385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.932403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.932584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.932613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.932764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.932781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.932941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.932959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.933104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.933122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.933332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.933349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 
00:26:19.706 [2024-12-13 09:37:31.933594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.933613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.933761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.933780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.933990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.934222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.934345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.934577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.934806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.934929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.934946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.935097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.935114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.935269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.935288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 
00:26:19.706 [2024-12-13 09:37:31.935439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.706 [2024-12-13 09:37:31.935462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.706 qpair failed and we were unable to recover it. 00:26:19.706 [2024-12-13 09:37:31.935630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.935648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.935742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.935759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.935921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.935939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.936151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.936169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.936403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.936420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.936581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.936600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.936759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.936777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.936929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.936947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.937026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.937043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 
00:26:19.707 [2024-12-13 09:37:31.937185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.937204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.937353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.937370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.937529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.937548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.937782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.937800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.938021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.938040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.938198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.938215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.938474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.938493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.938709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.938727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.938935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.938952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.939059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.939090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 
00:26:19.707 [2024-12-13 09:37:31.939382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.939400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.939631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.939665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.939899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.939918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.940018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.940034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.940258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.940276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.940452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.940470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.940623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.940640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.940890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.940908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.941053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.941070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.941236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.941253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 
00:26:19.707 [2024-12-13 09:37:31.941518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.941536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.941691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.941709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.941942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.941959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.942109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.942127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.942277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.942295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.942521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.942540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.942771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.942789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.942941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.942959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.943119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.943136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 00:26:19.707 [2024-12-13 09:37:31.943236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.943251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.707 qpair failed and we were unable to recover it. 
00:26:19.707 [2024-12-13 09:37:31.943507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-13 09:37:31.943527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.943776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.943793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.943949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.943966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.944897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.944914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.945063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.945080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 
00:26:19.708 [2024-12-13 09:37:31.945336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.945353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.945510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.945528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.945757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.945774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.945929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.945946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.946177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.946195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.946403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.946420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.946692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.946711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.946857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.946874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.947028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.947045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.947311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.947329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 
00:26:19.708 [2024-12-13 09:37:31.947567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.947586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.947678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.947693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.947869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.947887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.948071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.948087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.948313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.948330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.948475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.948493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.948757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.948775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.948882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.948899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.949093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.949110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.949364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.949382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 
00:26:19.708 [2024-12-13 09:37:31.949617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.949636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.949798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.949815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.949898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.949914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.950158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.950178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.950437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.950458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.950696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.950714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.950855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.950872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.708 [2024-12-13 09:37:31.951024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.708 [2024-12-13 09:37:31.951042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.708 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.951189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.951207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.951472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.951490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 
00:26:19.709 [2024-12-13 09:37:31.951721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.951738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.951986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.952003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.952181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.952199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.952460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.952478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.952738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.952755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.952915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.952932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.953082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.953100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.953271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.953291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.953528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.953545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.953656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.953673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 
00:26:19.709 [2024-12-13 09:37:31.953826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.953843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.953996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.954013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.954169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.954187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.954338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.954355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.954533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.954552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.954786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.954804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.955047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.955065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.955211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.955228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.955389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.955407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.955617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.955635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 
00:26:19.709 [2024-12-13 09:37:31.955855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.955876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.956143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.956160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.956312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.956330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.956495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.956513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.956610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.956626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.956780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.956798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.957042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.957060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.957294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.957312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.957457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.957474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.957637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.957655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 
00:26:19.709 [2024-12-13 09:37:31.957827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.957845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.958017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.958035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.709 qpair failed and we were unable to recover it. 00:26:19.709 [2024-12-13 09:37:31.958264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.709 [2024-12-13 09:37:31.958281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.958375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.958391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.958546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.958564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.958787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.958805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.958948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.958966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.959157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.959175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.959359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.959376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.959530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.959548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 
00:26:19.710 [2024-12-13 09:37:31.959647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.959901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.959918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.960026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.960043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.960276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.960293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.960487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.960505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.960735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.960753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.960895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.960912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.961071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.961090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.961251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.961268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.961506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.961523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 
00:26:19.710 [2024-12-13 09:37:31.961777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.961794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.961877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.961894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.962101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.962118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.962375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.962393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.962601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.962619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.962774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.962792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.962956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.962974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.963140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.963157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.963386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.963404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.963611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.963629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 
00:26:19.710 [2024-12-13 09:37:31.963804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.963825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.963983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.964000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.964255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.964273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.964430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.964453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.964730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.964748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.964964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.964981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.965215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.965233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.965320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.965336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.965587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.965605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.965711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.965729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 
00:26:19.710 [2024-12-13 09:37:31.965904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.965921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.710 qpair failed and we were unable to recover it. 00:26:19.710 [2024-12-13 09:37:31.966097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.710 [2024-12-13 09:37:31.966114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.966337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.966355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.966532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.966550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.966783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.966800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.967013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.967030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.967212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.967229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.967377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.967394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.967572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.967590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.967829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.967847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 
00:26:19.711 [2024-12-13 09:37:31.968021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.968038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.968323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.968341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.968519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.968537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.968743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.968760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.968901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.968919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.969147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.969164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.969394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.969410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.969564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.969583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.969734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.969752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.969958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 
00:26:19.711 [2024-12-13 09:37:31.970191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.970208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.970310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.970327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.970536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.970555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.970721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.970738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.970978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.970996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.971224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.971241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.971403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.971420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.971633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.971651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.971824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.971841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.972060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 
00:26:19.711 [2024-12-13 09:37:31.972233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.972443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.972633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.972791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.972962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.972980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.973213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.973230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.973382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.973399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.973633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.973651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.973888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.973905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 00:26:19.711 [2024-12-13 09:37:31.974072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.974089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.711 qpair failed and we were unable to recover it. 
00:26:19.711 [2024-12-13 09:37:31.974239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.711 [2024-12-13 09:37:31.974256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.974498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.974516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.974591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.974607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.974758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.974775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.975006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.975023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.975252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.975269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.975477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.975495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.975664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.975682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.975835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.975852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.976011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 
00:26:19.712 [2024-12-13 09:37:31.976200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.976369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.976546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.976775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.976948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.976965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.977143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.977160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.977314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.977331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.977563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.977585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.977746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.977764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.978024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 
00:26:19.712 [2024-12-13 09:37:31.978217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.978407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.978589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.978815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.978979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.978996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.979147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.979165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.979381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.979399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.979613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.979631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.979715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.979731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.979961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.979979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 
00:26:19.712 [2024-12-13 09:37:31.980209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.980345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.980524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.980618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.980776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.980962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.980980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.981218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.981236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.981440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.981468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.981677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.981695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 00:26:19.712 [2024-12-13 09:37:31.981845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.712 [2024-12-13 09:37:31.981863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.712 qpair failed and we were unable to recover it. 
00:26:19.713 [2024-12-13 09:37:31.982070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.982087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.982321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.982339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.982600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.982618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.982864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.982881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.983907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.983924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 
00:26:19.713 [2024-12-13 09:37:31.984154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.984447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.984469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.984668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.984685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.984861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.984879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.985032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.985050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.985259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.985276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.985434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.985458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.985612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.985633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.985841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.985858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.986065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.986083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 
00:26:19.713 [2024-12-13 09:37:31.986319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.986336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.986494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.986513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.986670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.986687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.986846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.986863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.987059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.987286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.987462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.987638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.987823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.987983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.988001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 
00:26:19.713 [2024-12-13 09:37:31.988150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.988167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.988403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.988420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.988503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.988519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.988747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.713 [2024-12-13 09:37:31.988765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.713 qpair failed and we were unable to recover it. 00:26:19.713 [2024-12-13 09:37:31.988969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.988986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.989143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.989160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.989316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.989333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.989591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.989609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.989785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.989803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.990011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.990028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 
00:26:19.714 [2024-12-13 09:37:31.990285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.990303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.990458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.990477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.990635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.990652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.990832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.990850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.991010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.991027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.991277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.991295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.991530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.991548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.991639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.991655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.991902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.991920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.992084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.992101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 
00:26:19.714 [2024-12-13 09:37:31.992272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.992290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.992491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.992509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.992676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.992693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.992974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.992991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.993134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.993151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.993305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.993322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.993593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.993611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.993778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.993798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.993949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.993966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.994171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 
00:26:19.714 [2024-12-13 09:37:31.994407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.994424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.994638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.994656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.994885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.994903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.995118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.995135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.995393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.995410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.995619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.995637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.995795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.995813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.995974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.995991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.996219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.996236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.996402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.996420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 
00:26:19.714 [2024-12-13 09:37:31.996671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.996690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.996838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.996855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.997015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.714 [2024-12-13 09:37:31.997032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.714 qpair failed and we were unable to recover it. 00:26:19.714 [2024-12-13 09:37:31.997173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.997191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.997311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.997545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.997563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.997726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.997743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.997994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.998195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.998322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 
00:26:19.715 [2024-12-13 09:37:31.998547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.998802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.998975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.998993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.999225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.999242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.999391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.999409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.999616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.999634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.999850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.999868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:31.999953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:31.999970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.000058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.000075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.000282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.000299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 
00:26:19.715 [2024-12-13 09:37:32.000474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.000503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.000691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.000708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.000916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.000933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.001085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.001103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.001323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.001341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.001548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.001566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.001773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.001790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.002026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.002045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.002144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.002161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.002395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.002413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 
00:26:19.715 [2024-12-13 09:37:32.002571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.002589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.002769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.002786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.003046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.003063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.003292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.003310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.003542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.003561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.003792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.003809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.003950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.003967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.004190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.004208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.004417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.004434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.004608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.004626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 
00:26:19.715 [2024-12-13 09:37:32.004781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.004799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.004954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.715 [2024-12-13 09:37:32.004972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.715 qpair failed and we were unable to recover it. 00:26:19.715 [2024-12-13 09:37:32.005211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.005228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.005378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.005396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.005626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.005644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.005837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.005854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.006104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.006122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.006354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.006372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.006525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.006543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.006692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.006709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 
00:26:19.716 [2024-12-13 09:37:32.006872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.006889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.007120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.007138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.007396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.007413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.007639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.007657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.007799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.007817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.007920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.007937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.008080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.008098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.008261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.008278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.008524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.008542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.008693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.008710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 
00:26:19.716 [2024-12-13 09:37:32.008896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.008914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.009056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.009073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.009235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.009253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.009428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.009446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.009716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.009734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.009900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.009917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 
00:26:19.716 [2024-12-13 09:37:32.010648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.010932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.010950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.011204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.011221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.011461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.011480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.011689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.011708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.011902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.011920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.012010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.012027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.012110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.012126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.012298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.012315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 
00:26:19.716 [2024-12-13 09:37:32.012494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.716 [2024-12-13 09:37:32.012512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.716 qpair failed and we were unable to recover it. 00:26:19.716 [2024-12-13 09:37:32.012619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.012637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.012788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.012806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.012983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.013141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.013392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.013524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.013637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.013862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.013880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.014127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.014146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 
00:26:19.717 [2024-12-13 09:37:32.014293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.014311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.014467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.014485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.014687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.014704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.014856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.014873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.015036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.015054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.015236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.015486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.015504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.015659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.015676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.015882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.015900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.016125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.016142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 
00:26:19.717 [2024-12-13 09:37:32.016246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.016262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.016469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.016497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.016600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.016617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.016771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.016789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.016997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.017160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.017278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.017437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.017673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.017884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.017902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 
00:26:19.717 [2024-12-13 09:37:32.018108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.018209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.018302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.018462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.018717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.018965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.018982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.019148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.019166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.019396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.019413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.019678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.019696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 00:26:19.717 [2024-12-13 09:37:32.019853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.019870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.717 qpair failed and we were unable to recover it. 
00:26:19.717 [2024-12-13 09:37:32.019956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.717 [2024-12-13 09:37:32.019972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.020925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.020943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.021162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.021178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.021385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.021402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.021580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.021598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 
00:26:19.718 [2024-12-13 09:37:32.021739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.021756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.021858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.021875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.022063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.022081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.022260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.022277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.022467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.022485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.022623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.022641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.022884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.022901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.023156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.023173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.023346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.023364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.718 qpair failed and we were unable to recover it. 00:26:19.718 [2024-12-13 09:37:32.023540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.718 [2024-12-13 09:37:32.023559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 
00:26:19.996 [2024-12-13 09:37:32.023720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.023738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.023898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.023916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.024073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.024091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.024299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.024317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.024526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.024544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.024640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.024656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.024802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.024820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.025080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.025101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.025325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.025343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.025565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.025583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 
00:26:19.996 [2024-12-13 09:37:32.025756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.025773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.025929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.025947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.026970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.026986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 00:26:19.996 [2024-12-13 09:37:32.027141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.996 [2024-12-13 09:37:32.027159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.996 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.027381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.027398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.027629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.027647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.027902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.027919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.028931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.028948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.029192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.029210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.029439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.029461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.029692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.029710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.029866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.029883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.030025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.030042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.030215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.030246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.030515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.030538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.030704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.030722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.030862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.030879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.031090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.031107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.031258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.031275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.031497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.031514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.031728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.031745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.032873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.032893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.033076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.033093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.033259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.033277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.033428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.033445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.033628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.033646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.033805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.033822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.033995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.034158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.034392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.034572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.034746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.034918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.034935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.035027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.035044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.035184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.035201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.035370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.035388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.035598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.035616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.035825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.035843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.036120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.036137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.036316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.036333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.036499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.036517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.036724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.036741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.036893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.036911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.037017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.037263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.037363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.037464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.037687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.037887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.037909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.038077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.038095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.038314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.038331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.038566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.038585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.038843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.038861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.039021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.039038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 
00:26:19.997 [2024-12-13 09:37:32.039194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.039212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.039443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.997 [2024-12-13 09:37:32.039472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.997 qpair failed and we were unable to recover it. 00:26:19.997 [2024-12-13 09:37:32.039571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.039592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.039751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.039769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.039910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.039927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.040285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.040400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.040576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.040929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.040946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.041169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.041403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.041567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.041656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.041815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.041985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.042002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.042233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.042251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.042497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.042515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.042672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.042689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.042923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.042940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.043148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.043169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.043344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.043361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.043512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.043530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.043697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.043715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.043921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.043939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.044094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.044112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.044324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.044341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.044510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.044528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.044706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.044723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.044823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.044839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.045046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.045063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.045156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.045171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.045402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.045419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.045659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.045677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.045920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.045937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.046102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.046119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.046261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.046278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.046460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.046477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.046707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.046724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.046878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.046895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.047100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.047117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.047264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.047281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.047426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.047443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.047678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.047696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.047850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.047868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.048104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.048121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.048206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.048221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.048387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.048406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.048642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.048660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.048812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.048829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.048983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.049210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.049377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.049567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.049727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.049939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.049956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.050116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.050133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.050342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.050359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.050515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.050533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 
00:26:19.998 [2024-12-13 09:37:32.050697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.050714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.050973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.050990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.051220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.051240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.051414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.051434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.051595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.051613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.051774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.051791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.998 qpair failed and we were unable to recover it. 00:26:19.998 [2024-12-13 09:37:32.052020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.998 [2024-12-13 09:37:32.052037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.052230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.052248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.052464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.052482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.052634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.052651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.052859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.052877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.053028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.053046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.053186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.053204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.053432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.053457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.053695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.053712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.053865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.053885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.054038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.054207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.054400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.054569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.054739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.054975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.054993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.055201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.055219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.055494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.055512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.055672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.055689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.055861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.055878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.056091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.056108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.056319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.056337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.056492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.056510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.056757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.056774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.057013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.057030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.057182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.057199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.057363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.057380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.057630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.057648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.057875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.057893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.058045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.058168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.058334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.058538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.058726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.058979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.058996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.059220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.059238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.059479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.059498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.059752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.059770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.059942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.059960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.060104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.060121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.060293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.060311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.060458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.060475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.060710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.060727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.060943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.060961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.061065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.061083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.061292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.061310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.061459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.061478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.061690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.061707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.061938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.061956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.062045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.062064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.062228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.062246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.062459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.062477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.062726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.062744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.062906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 
00:26:19.999 [2024-12-13 09:37:32.063051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.063068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.063225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.063243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:19.999 [2024-12-13 09:37:32.063413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.999 [2024-12-13 09:37:32.063431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:19.999 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.063699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.063719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.063883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.063900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.064136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.064153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.064297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.064315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.064551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.064569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.064758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.064776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.065013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.065030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.065205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.065223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.065460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.065478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.065565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.065581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.065814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.065832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.066809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.066827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.067055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.067218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.067338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.067442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.067692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.067955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.067973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.068230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.068248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.068336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.068352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.068500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.068518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.068726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.068744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.068985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.069002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.069208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.069225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.069483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.069501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.069727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.069935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.069952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.070185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.070203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.070359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.070377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.070533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.070552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.070721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.070738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.070916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.070933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.071176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.071193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.071433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.071456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.071712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.071730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.071884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.071902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.072110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.072127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.072276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.072294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.072504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.072522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.072697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.072715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.072868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.072885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.073047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.073065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.073296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.073313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.073491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.073509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.073717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.073904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.073922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.074149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.074167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.074398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.074415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.074567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.074585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.074764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.074781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.074988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.075005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.075220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.075237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 
00:26:20.000 [2024-12-13 09:37:32.075481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.075499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.075706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.075723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.075975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.075996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.076233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.076251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.076473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.076492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.076699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.076716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.076803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.000 [2024-12-13 09:37:32.076819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.000 qpair failed and we were unable to recover it. 00:26:20.000 [2024-12-13 09:37:32.076995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.077013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.077193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.077211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.077419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.077436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.077616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.077634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.077873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.077890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.078128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.078145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.078353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.078371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.078581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.078600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.078756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.078773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.078954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.078972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.079208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.079225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.079376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.079393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.079575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.079592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.079772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.079789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.079933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.079951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.080114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.080131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.080353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.080370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.080533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.080551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.080734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.080751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.080840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.080856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.081091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.081108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.081339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.081356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.081505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.081523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.081763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.081780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.081939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.081957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.082158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.082176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.082279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.082295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.082456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.082474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.082710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.082727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.082899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.082917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.083020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.083118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.083364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.083477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.083901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.083921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.084060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.084077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.084256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.084273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.084419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.084437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.084618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.084636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.084789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.084807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.085021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.085207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.085427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.085534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.085648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.085883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.085901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.086078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.086097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.086307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.086482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.086501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.086733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.086751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.086955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.086972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.087154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.087324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.087440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.087560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.087778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.087936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.087953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.088111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.088127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.088271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.088289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.088495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.088513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.088737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.088755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 00:26:20.001 [2024-12-13 09:37:32.088950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.088967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.001 qpair failed and we were unable to recover it. 
00:26:20.001 [2024-12-13 09:37:32.089191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.001 [2024-12-13 09:37:32.089208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.089387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.089405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.089617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.089636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.089841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.089858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.090013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.090030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.090189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.090207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.090383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.090401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.090610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.090627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.090838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.090856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.091068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 
00:26:20.002 [2024-12-13 09:37:32.091240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.091330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.091527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.091661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.091851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.091870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.092030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.092254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.092418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.092617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.092738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 
00:26:20.002 [2024-12-13 09:37:32.092967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.092984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.093231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.093403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.093614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.093632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.093841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.093858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.093956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.093974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.094152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.094169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.094313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.094330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.094503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.094521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.094732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.094750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 
00:26:20.002 [2024-12-13 09:37:32.094936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.094954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.095926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.095942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.096086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.096104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.096268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.096287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 
00:26:20.002 [2024-12-13 09:37:32.096520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.096538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.096766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.096783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.096990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.097008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.097265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.097283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.097518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.097536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.097693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.097710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.097868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.097887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.098057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.098076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.098312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.098330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.098560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.098579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 
00:26:20.002 [2024-12-13 09:37:32.098676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.098693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.098839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.098863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.099923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.002 [2024-12-13 09:37:32.099940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.002 qpair failed and we were unable to recover it. 00:26:20.002 [2024-12-13 09:37:32.100130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.100147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.100258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.100277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.100417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.100434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.100692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.100711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.100917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.100935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.101074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.101091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.101234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.101252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.101418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.101435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.101690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.101708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.101887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.101904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.102076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.102253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.102347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.102519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.102733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.102895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.102912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.103062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.103081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.103332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.103350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.103500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.103520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.103621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.103639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.103853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.103871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.104040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.104237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.104415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.104633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.104742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.104962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.104979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.105119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.105137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.105392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.105411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.105588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.105606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.105770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.105787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.106015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.106033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.106287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.106304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.106529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.106547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.106752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.106773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.107041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.107059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.107283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.107301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.107459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.107477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.107696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.107713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.107883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.107899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.108107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.108125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.108320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.108338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.108509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.108527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.108753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.108769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.109908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.109925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.110149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.110167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.110261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.110278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.110465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.110483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.110695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.110714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.110932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.110949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.111124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.111140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.111318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.111335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.111569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.111589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.111748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.111766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.111944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.111961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.112051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.112070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.112309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.112327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 
00:26:20.003 [2024-12-13 09:37:32.112569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.112587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.003 [2024-12-13 09:37:32.112726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.003 [2024-12-13 09:37:32.112744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.003 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.112964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.112981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.113141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.113158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.113243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.113260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.113423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.113439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.113655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.113672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.113854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.113871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.114026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.114214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.114388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.114577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.114762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.114930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.114947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.115123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.115140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.115297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.115314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.115542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.115560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.115719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.115738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.115968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.115986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.116164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.116181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.116417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.116434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.116616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.116633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.116842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.116861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.117968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.118169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.118186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.118421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.118439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.118663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.118681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.118932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.118952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.119157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.119174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.119386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.119405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.119613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.119632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.119708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.119724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.119913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.120169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.120186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.120348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.120364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.120574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.120593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.120696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.120713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.120895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.121065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.121226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.121433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.121454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.121683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.121700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.121912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.121929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.122092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.122301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.122418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.122610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.122783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.122954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.122970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.123113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.123131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.123364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.123383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.123616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.123634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.123820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.123837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 00:26:20.004 [2024-12-13 09:37:32.124078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.004 [2024-12-13 09:37:32.124096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.004 qpair failed and we were unable to recover it. 
00:26:20.004 [2024-12-13 09:37:32.124259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.124277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.124433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.124456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.124695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.124713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.124905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.124922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.125961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.125979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.126232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.126249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.126407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.126424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.126678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.126697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.126932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.127970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.127999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.128162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.128183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.128373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.128393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.128559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.128577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.128821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.128840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.129065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.129083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.129174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.129191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.129432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.129454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.129613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.129631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.129791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.129809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.130005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.130198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.130317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.130491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.130597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.130808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.130826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.131885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.131902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.131988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.132173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.132356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.132471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.132639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.132930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.132948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.133158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.133272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.133378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.133556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.133727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.133908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.133926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.134078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.134095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.134268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.134433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.134456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.134691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.134709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.134871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.134888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.135035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.135226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 
00:26:20.005 [2024-12-13 09:37:32.135412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.135587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.135783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.135960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.005 [2024-12-13 09:37:32.135978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.005 qpair failed and we were unable to recover it. 00:26:20.005 [2024-12-13 09:37:32.136055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.136070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.136250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.136267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.136423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.136440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.136604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.136621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.136836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.136853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.137086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.137104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.137367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.137384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.137620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.137638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.137855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.137876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.138018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.138035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.138253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.138271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.138499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.138517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.138728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.138746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.138910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.138928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.139097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.139205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.139364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.139465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.139654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.139896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.139913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.140951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.140969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.141164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.141181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.141323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.141341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.141503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.141521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.141731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.141749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.141985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.142003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.142264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.142282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.142534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.142553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.142727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.142745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.142974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.142991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.143199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.143218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.143362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.143379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.143483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.143500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.143675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.143693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.143847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.143864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.144866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.144884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.145057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.145232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.145472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.145579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.145671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.145953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.145972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.146165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.146182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.146389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.146407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.146616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.146634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.146810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.146827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 
00:26:20.006 [2024-12-13 09:37:32.146974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.146993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.006 [2024-12-13 09:37:32.147954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.006 [2024-12-13 09:37:32.147973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.006 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.148203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.148221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.148397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.148414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.148609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.148626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 
00:26:20.007 [2024-12-13 09:37:32.148866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.148883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.148985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.149099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.149264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.149436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.149722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.149892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.149910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.150142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.150320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.150452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 
00:26:20.007 [2024-12-13 09:37:32.150625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.150733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.150842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.150859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.151944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.151963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 00:26:20.007 [2024-12-13 09:37:32.152158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.007 [2024-12-13 09:37:32.152176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.007 qpair failed and we were unable to recover it. 
00:26:20.007 [2024-12-13 09:37:32.152355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.007 [2024-12-13 09:37:32.152372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.007 qpair failed and we were unable to recover it.
[... the same three-line error pattern (connect() failed with errno = 111 in posix.c:1054, followed by the nvme_tcp.c:2288 sock connection error and "qpair failed and we were unable to recover it.") repeats continuously from 09:37:32.152 through 09:37:32.191 against addr=10.0.0.2, port=4420, first for tqpair=0xb121a0 and later for tqpair values 0x7fd568000b90, 0x7fd574000b90, and 0x7fd56c000b90 ...]
00:26:20.010 [2024-12-13 09:37:32.191833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.010 [2024-12-13 09:37:32.191851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.010 qpair failed and we were unable to recover it.
00:26:20.010 [2024-12-13 09:37:32.192012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.192029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.192242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.192261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.192437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.192460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.192621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.192638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.192885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.192902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.193111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.193128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.193378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.193397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.193547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.193566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.193794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.193814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.193966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.193984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 
00:26:20.010 [2024-12-13 09:37:32.194147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.194164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.194312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.194329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.194579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.194598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.194773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.194790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.194937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.010 [2024-12-13 09:37:32.195109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.010 [2024-12-13 09:37:32.195127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.010 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.195271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.195289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.195495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.195514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.195724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.195742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.195836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.195851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.196031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.196263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.196282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.196493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.196513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.196703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.196721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.196891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.196909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.197121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.197374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.197481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.197640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.197798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.197965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.197984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.198172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.198189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.198356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.198373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.198582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.198600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.198745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.198764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.198872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.198892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.199129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.199147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.199398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.199416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.199647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.199665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.199873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.199890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.200096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.200269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.200445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.200637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.200801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.200978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.200996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.201228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.201246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.201483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.201501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.201645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.201663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.201830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.201847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.202105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.202122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.202329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.202347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.202556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.202575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.202734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.202753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.202985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.203207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.203387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.203587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.203769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.203869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.203885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.204062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.204173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.204348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.204544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.204708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.204948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.204965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.205115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.205282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.205406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.205606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.205725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.205908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.205925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.206930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.206947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.207092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.207109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 
00:26:20.011 [2024-12-13 09:37:32.207315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.207334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.011 [2024-12-13 09:37:32.207481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.011 [2024-12-13 09:37:32.207499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.011 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.207653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.207671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.207822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.207840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.207978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.207995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.208178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.208197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.208282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.208297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.208438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.208460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.208632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.208649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.208870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.208887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.209047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.209294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.209405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.209596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.209710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.209937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.209954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.210133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.210149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.210321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.210340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.210492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.210511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.210611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.210627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.210842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.210860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.211937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.211955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.212098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.212321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.212512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.212631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.212809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.212934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.212952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.213059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.213077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.213232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.213249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.213465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.213483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.213692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.213710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.213879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.213896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.214009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.214027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.214221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.214238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.214457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.214476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.214572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.214589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.214819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.214836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.215043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.215059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.215227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.215244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.215473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.215492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.215724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.215742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.215824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.215839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.216044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.216063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.216284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.216301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.216541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.216563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.216769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.216785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.216991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.217165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.217182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.217398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.217415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.217631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.217649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.217870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.217888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.218040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.218058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.218273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.218291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.218457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.218476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 
00:26:20.012 [2024-12-13 09:37:32.218624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.218641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.218873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.218891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.219046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.219064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.219273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.219289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.219445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.219469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.219548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.219565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.012 [2024-12-13 09:37:32.219740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.012 [2024-12-13 09:37:32.219757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.012 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.219930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.219947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.220128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.220146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.220348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.220366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.220543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.220562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.220764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.220781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.220941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.220960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.221166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.221184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.221326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.221343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.221485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.221503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.221663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.221681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.221911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.221929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.222024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.222042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.222237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.222254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.222406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.222423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.222679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.222698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.222867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.222885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.223039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.223056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.223295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.223313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.223545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.223563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.223748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.223766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.223864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.223881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.224114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.224225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.224482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.224592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.224777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.224953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.224972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.225957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.225973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.226135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.226152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.226313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.226331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.226484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.226501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.226604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.226621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.226843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.226860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.227038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.227057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.227252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.227269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.227416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.227434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.227590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.227610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.227769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.227788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.228012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.228192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.228294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.228493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.228608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.228774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.228792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.229002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.229020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.229253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.229271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.229459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.229483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.229630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.229648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.229878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.229896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.230918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.230937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.231099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.231116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.231322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.231340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 00:26:20.013 [2024-12-13 09:37:32.231434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.013 [2024-12-13 09:37:32.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.013 qpair failed and we were unable to recover it. 
00:26:20.013 [2024-12-13 09:37:32.231712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.231730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.231810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.231826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.232883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.232901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.233110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.233128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.233369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.233386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.233595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.233614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.233760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.233777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.234964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.234981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.235144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.235161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.235350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.235368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.235591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.235609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.235787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.235804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.236938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.236959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.237118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.237135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.237375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.237393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.237653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.237671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.237779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.237796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.238937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.239090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.239109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.239322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.239339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.239566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.239584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.239849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.239868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.240132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.240149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.240352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.240371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.240476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.240494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.240721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.240740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.240895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.240912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.241097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.241220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.241409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.241515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.241613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.241816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.241833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.242846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.242863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 
00:26:20.014 [2024-12-13 09:37:32.243097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.243115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.243297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.243316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.243464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.243483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.243578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.243595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.243826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.243844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.244010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.244027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.244127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.244143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.014 [2024-12-13 09:37:32.244353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-12-13 09:37:32.244371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.014 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.244600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.244622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.244835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.244854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 
00:26:20.015 [2024-12-13 09:37:32.245001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.245018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.245294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.245312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.245491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.245511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.245676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.245694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.245938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.245956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.246190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.246383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.246571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.246687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.246801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 
00:26:20.015 [2024-12-13 09:37:32.246981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.246998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.247234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.247252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.247476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.247494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.247654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.247671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.247840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.247860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.248008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.248025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.248248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.248266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.248446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.248610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.248628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 00:26:20.015 [2024-12-13 09:37:32.248791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-12-13 09:37:32.248808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.015 qpair failed and we were unable to recover it. 
00:26:20.015 [2024-12-13 09:37:32.249058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.015 [2024-12-13 09:37:32.249076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.015 qpair failed and we were unable to recover it.
00:26:20.015 [2024-12-13 09:37:32.249164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.015 [2024-12-13 09:37:32.249180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.015 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 09:37:32.249 through 09:37:32.289 for tqpairs 0x7fd568000b90, 0x7fd56c000b90, 0x7fd574000b90, and 0xb121a0, all attempting to connect to addr=10.0.0.2, port=4420 ...]
00:26:20.018 [2024-12-13 09:37:32.289409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.018 [2024-12-13 09:37:32.289426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.018 qpair failed and we were unable to recover it.
00:26:20.018 [2024-12-13 09:37:32.289581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.289601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.289874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.289891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.290064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.290081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.290241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.290258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.290474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.290492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.290587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.290603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.290858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.290875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.291093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.291111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.291351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.291368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.291539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.291557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 
00:26:20.018 [2024-12-13 09:37:32.291768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.291787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.292016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.292033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.292292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.292310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.292464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.292482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.292593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.292611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.292842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.292859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.293089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.293107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.293266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.293284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.293422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.293663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.293682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 
00:26:20.018 [2024-12-13 09:37:32.293837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.293854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.294086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.294104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.294260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.294277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.294442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.294465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.294639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.294657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.294807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.294825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.295047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.295066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.295216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.295235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.295402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.295421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.295594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.295612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 
00:26:20.018 [2024-12-13 09:37:32.295838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.295855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.296009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.296026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.296236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.018 [2024-12-13 09:37:32.296254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.018 qpair failed and we were unable to recover it. 00:26:20.018 [2024-12-13 09:37:32.296492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.296510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.296671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.296688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.296862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.296879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.297053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.297072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.297225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.297242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.297384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.297402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.297496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.297512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.297747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.297765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.298005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.298022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.298251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.298270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.298524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.298544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.298780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.298797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.298893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.298910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.299070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.299087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.299179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.299194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.299356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.299374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.299533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.299552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.299787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.299804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.300034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.300053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.300207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.300226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.300326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.300345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.300576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.300594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.300750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.300769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.301002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.301192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.301372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.301573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.301774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.301961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.301979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.302139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.302156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.302345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.302369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.302542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.302562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.302673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.302690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.302934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.302952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.303184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.303202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.303342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.303360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.303518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.303537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.303697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.303714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.303881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.303898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.304894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.304912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.305055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.305171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.305336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.305517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.305716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.305964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.305981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.306174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.306191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.306436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.306459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.306604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.306623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.306809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.306827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.307438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.307923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.307938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.308193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.308211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.308372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.308389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.308630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.308648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.308891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.308908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.309143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.309159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 
00:26:20.019 [2024-12-13 09:37:32.309305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.309321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.309529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.309548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.309686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.309703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.309912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.309929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.310070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.310089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.310264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.310282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.310493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.310511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.019 qpair failed and we were unable to recover it. 00:26:20.019 [2024-12-13 09:37:32.310669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.019 [2024-12-13 09:37:32.310689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.310919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.310937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.311165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.311183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 
00:26:20.020 [2024-12-13 09:37:32.311337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.311510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.311528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.311756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.311773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.311955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.311971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.312912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.312928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 
00:26:20.020 [2024-12-13 09:37:32.313154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.313171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.313344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.313361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.313505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.313523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.313679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.313696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.313862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.313880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.314039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.314056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.314196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.314215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.314421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.314438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.314667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.314684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 00:26:20.020 [2024-12-13 09:37:32.314846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.020 [2024-12-13 09:37:32.314863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.020 qpair failed and we were unable to recover it. 
00:26:20.020 [2024-12-13 09:37:32.315015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.020 [2024-12-13 09:37:32.315031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.020 qpair failed and we were unable to recover it.
00:26:20.021 [2024-12-13 09:37:32.328145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb200f0 (9): Bad file descriptor
00:26:20.021 [2024-12-13 09:37:32.328411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.021 [2024-12-13 09:37:32.328445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420
00:26:20.021 qpair failed and we were unable to recover it.
00:26:20.021 [2024-12-13 09:37:32.330869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.021 [2024-12-13 09:37:32.330889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.021 qpair failed and we were unable to recover it.
00:26:20.021 [2024-12-13 09:37:32.334571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.021 [2024-12-13 09:37:32.334592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.021 qpair failed and we were unable to recover it.
00:26:20.308 [2024-12-13 09:37:32.349916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.308 [2024-12-13 09:37:32.349933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.308 qpair failed and we were unable to recover it.
00:26:20.309 [2024-12-13 09:37:32.355021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.355162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.355367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.355576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.355705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.355814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.355831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.356034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.356202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.356523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 
00:26:20.309 [2024-12-13 09:37:32.356712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.356840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.356859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.357930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.357949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.358122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.358375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 
00:26:20.309 [2024-12-13 09:37:32.358493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.358665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.358813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.358980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.358999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.359190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.359207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.359364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.359381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.359558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.359576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.359676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.359693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.359834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.359853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.360080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 
00:26:20.309 [2024-12-13 09:37:32.360355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.360464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.360623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.360820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.360929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.360945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.361211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.361231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.361317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.361334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.309 qpair failed and we were unable to recover it. 00:26:20.309 [2024-12-13 09:37:32.361547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.309 [2024-12-13 09:37:32.361567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.361666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.361683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.361865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.361882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 
00:26:20.310 [2024-12-13 09:37:32.361987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.362914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.362930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 
00:26:20.310 [2024-12-13 09:37:32.363548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.363945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.363962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.364829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.364846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 
00:26:20.310 [2024-12-13 09:37:32.365013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.365031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.365193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.365211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.365355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.365385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.365571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.365589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.365734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.365752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.365986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.366166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.366266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.366482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.366710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 
00:26:20.310 [2024-12-13 09:37:32.366834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.366852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.367108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.367126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.367301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.367319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.367588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.367608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.367764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.367782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.367959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.367977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.310 qpair failed and we were unable to recover it. 00:26:20.310 [2024-12-13 09:37:32.368132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.310 [2024-12-13 09:37:32.368151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.368311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.368329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.368479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.368498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.368660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.368678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 
00:26:20.311 [2024-12-13 09:37:32.368832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.368849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.369831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.369849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.370056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.370158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.370357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 
00:26:20.311 [2024-12-13 09:37:32.370529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.370773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.370883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.370899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.371949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.371966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.372237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.372255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 
00:26:20.311 [2024-12-13 09:37:32.372501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.372519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.372628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.372646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.372727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.372747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.372892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.372911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.373146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.373164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.373321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.373339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.373488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.373506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.373651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.373669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.373829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.373846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.311 [2024-12-13 09:37:32.374066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.374083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 
00:26:20.311 [2024-12-13 09:37:32.374258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.311 [2024-12-13 09:37:32.374276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.311 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.374365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.374382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.374572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.374590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.374745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.374763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.374870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.374888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 
00:26:20.312 [2024-12-13 09:37:32.375710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.375841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.375859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.376832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.376849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 00:26:20.312 [2024-12-13 09:37:32.377007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.312 [2024-12-13 09:37:32.377026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.312 qpair failed and we were unable to recover it. 
00:26:20.312 [2024-12-13 09:37:32.377128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.312 [2024-12-13 09:37:32.377148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420
00:26:20.312 qpair failed and we were unable to recover it.
00:26:20.312 [2024-12-13 09:37:32.378530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.312 [2024-12-13 09:37:32.378550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.312 qpair failed and we were unable to recover it.
00:26:20.312 [2024-12-13 09:37:32.378648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.312 [2024-12-13 09:37:32.378670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.312 qpair failed and we were unable to recover it.
00:26:20.318 [... same connect() failed (errno = 111) / qpair-unrecoverable error sequence repeated for tqpair=0x7fd568000b90, 0x7fd574000b90, and 0xb121a0 with addr=10.0.0.2, port=4420 through 09:37:32.407 ...]
00:26:20.318 [2024-12-13 09:37:32.407396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.407413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.407644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.407662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.407761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.407778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.407857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.407878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.407976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.407994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 
00:26:20.318 [2024-12-13 09:37:32.408798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.408916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.408933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.409906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.409923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 
00:26:20.318 [2024-12-13 09:37:32.410316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.410892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.410982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.411082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.411256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.411370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.411617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 00:26:20.318 [2024-12-13 09:37:32.411716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.318 [2024-12-13 09:37:32.411733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.318 qpair failed and we were unable to recover it. 
00:26:20.319 [2024-12-13 09:37:32.411814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.411834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.411986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.412903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.412920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.413003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.413227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 
00:26:20.319 [2024-12-13 09:37:32.413389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.413620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.413747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.413924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.413946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.414028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.414045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.414189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.414208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.414361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.414378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.414614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.414631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.414830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.414848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.415125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.415142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 
00:26:20.319 [2024-12-13 09:37:32.415282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.415572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.415590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.415750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.415769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.415855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.415871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.416850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.416868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 
00:26:20.319 [2024-12-13 09:37:32.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.417133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.417290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.417307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.417479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.417498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.417652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.417670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.417885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.417903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.418132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.418149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.418362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.418379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.418594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.418613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.319 [2024-12-13 09:37:32.418886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.319 [2024-12-13 09:37:32.418904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.319 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.418991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 
00:26:20.320 [2024-12-13 09:37:32.419217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.419459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.419560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.419808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.419922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.419941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.420162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.420181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.420324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.420341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.420453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.420471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.420566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.420584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.420804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.420821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 
00:26:20.320 [2024-12-13 09:37:32.421059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.421314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.421490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.421594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.421760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.421921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.421938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.422215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.422233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.422465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.422483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.422718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.422737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.422891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.422909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 
00:26:20.320 [2024-12-13 09:37:32.423062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.423080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.423310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.423328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.423487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.423506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.423671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.423689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.423927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.423944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.424127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.424144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.424293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.424310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.424488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.424506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.424665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.424683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.424959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.424977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 
00:26:20.320 [2024-12-13 09:37:32.425136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.425153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.425325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.425342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.425432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.425458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.425614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.425632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.320 qpair failed and we were unable to recover it. 00:26:20.320 [2024-12-13 09:37:32.425889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.320 [2024-12-13 09:37:32.425907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.426002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.426197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.426361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.426627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.426749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 
00:26:20.321 [2024-12-13 09:37:32.426950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.426972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.427181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.427199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.427365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.427382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.427533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.427551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.427696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.427715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.427944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.427962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.428113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.428308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.428485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.428737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 
00:26:20.321 [2024-12-13 09:37:32.428847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.428950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.428966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.429849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.429998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.430169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 
00:26:20.321 [2024-12-13 09:37:32.430333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.430463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.430625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.430852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.430869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.431012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.431031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.321 [2024-12-13 09:37:32.431182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.321 [2024-12-13 09:37:32.431199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.321 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.431412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.431429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.431590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.431608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.431697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.431713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.431861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.431879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 
00:26:20.322 [2024-12-13 09:37:32.432022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.432247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.432416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.432574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.432784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.432973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.432990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.433279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.433297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.433495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.433515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.433694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.433712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.433861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.433878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 
00:26:20.322 [2024-12-13 09:37:32.434045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.434068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.434240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.434474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.434494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.434717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.434735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.434884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.434902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.435058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.435077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.435245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.435263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.435437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.435462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.435605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.435622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.435870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.435888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 
00:26:20.322 [2024-12-13 09:37:32.436091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.436109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.436342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.436359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.436548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.436729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.436746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.436912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.436930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.437033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.437203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.437312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.437490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.437663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 
00:26:20.322 [2024-12-13 09:37:32.437820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.437838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.438049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.438068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.438220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.438237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.322 [2024-12-13 09:37:32.438457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.322 [2024-12-13 09:37:32.438475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.322 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.438648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.438665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.442581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.442600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.442805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.442824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.443007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.443025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.443197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.443214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.443392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.443410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 
00:26:20.323 [2024-12-13 09:37:32.443582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.443600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.443823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.443841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.444941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.444961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.445169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.445267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 
00:26:20.323 [2024-12-13 09:37:32.445468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.445646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.445878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.445977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.445993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.446105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.446123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.446293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.446310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.446389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.446406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.446622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.446640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.446872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.446890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.447146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.447165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 
00:26:20.323 [2024-12-13 09:37:32.447325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.447343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.447540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.447558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.447716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.447733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.447913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.447930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.448169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.448187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.448343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.448361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.448527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.448546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.448778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.448796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.449010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.449027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.449183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.449201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 
00:26:20.323 [2024-12-13 09:37:32.449347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.449365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.323 [2024-12-13 09:37:32.449568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.323 [2024-12-13 09:37:32.449586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.323 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.449745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.449762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.449969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.449986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.450178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.450195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.450374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.450392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.450616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.450635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.450725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.450741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.450969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.450990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 
00:26:20.324 [2024-12-13 09:37:32.451294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.451854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.451991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.452238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.452330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.452569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.452670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 
00:26:20.324 [2024-12-13 09:37:32.452827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.452845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.453096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.453114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.453347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.453365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.453519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.453537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.453700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.453719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.453867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.453884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.454071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.454294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.454456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.454646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 
00:26:20.324 [2024-12-13 09:37:32.454839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.454960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.454977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.455938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.455955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.456193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.456211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 00:26:20.324 [2024-12-13 09:37:32.456482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.456500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.324 qpair failed and we were unable to recover it. 
00:26:20.324 [2024-12-13 09:37:32.456659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.324 [2024-12-13 09:37:32.456676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.456857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.456878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.456966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.456982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.457142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.457159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.457322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.457339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.457544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.457563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.457667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.457686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.457896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.457916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.458108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.458127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.458285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.458305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 
00:26:20.325 [2024-12-13 09:37:32.458466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.458484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.458691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.458851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.458869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.459960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.459976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 
00:26:20.325 [2024-12-13 09:37:32.460073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.460174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.460336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.460468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.460743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.460970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.460988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.461151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.461168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.461333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.461458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.461476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.461717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.461734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 
00:26:20.325 [2024-12-13 09:37:32.461821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.461837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.461986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.462003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.462169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.462186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.462344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.462361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.462579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.325 [2024-12-13 09:37:32.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.325 qpair failed and we were unable to recover it. 00:26:20.325 [2024-12-13 09:37:32.462783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.462801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.462984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.463169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.463328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.463509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 
00:26:20.326 [2024-12-13 09:37:32.463688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.463911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.463928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.464882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.464899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 
00:26:20.326 [2024-12-13 09:37:32.465362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.465884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.465901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.466107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.466298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.466497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.466636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.466815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 
00:26:20.326 [2024-12-13 09:37:32.466973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.466991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.467086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.467103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.467251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.467269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.467428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.467455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.467610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.467628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.467835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.467853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.468014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.468031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.468259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.468276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.468357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.468373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.468558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 
00:26:20.326 [2024-12-13 09:37:32.468717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.326 [2024-12-13 09:37:32.468734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.326 qpair failed and we were unable to recover it. 00:26:20.326 [2024-12-13 09:37:32.468899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.468916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.469895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.469913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.470138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 
00:26:20.327 [2024-12-13 09:37:32.470315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.470488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.470611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.470722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.470964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.470982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.471156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.471174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.471399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.471417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.471633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.471652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.471758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.471775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.471921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.471939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 
00:26:20.327 [2024-12-13 09:37:32.472048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.472274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.472454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.472634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.472753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.472927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.472945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.473203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.473221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.473433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.473457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.473603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.473621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.473781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.473800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 
00:26:20.327 [2024-12-13 09:37:32.473951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.473968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.474137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.474154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.474313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.474331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.474502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.474524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.474676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.474693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.474882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.474900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.475132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.475150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.475289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.475306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.327 qpair failed and we were unable to recover it. 00:26:20.327 [2024-12-13 09:37:32.475410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.327 [2024-12-13 09:37:32.475428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.475662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.475680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 
00:26:20.328 [2024-12-13 09:37:32.475832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.475850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.475947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.475964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.476254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.476272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.476366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.476382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.476645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.476663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.476764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.476781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.476977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.476994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.477183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.477201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.477305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.477323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.477580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.477598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 
00:26:20.328 [2024-12-13 09:37:32.477722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.477740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.477837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.477854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.478111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.478129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.478380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.478398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.478565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.478584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.478738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.478756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.478933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.478951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.479040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.479207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.479328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 
00:26:20.328 [2024-12-13 09:37:32.479498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.479683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.479939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.479957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.480152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.480170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.480386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.480404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.480552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.480570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.480721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.480738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.480897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.480915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.481074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.481091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.481274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.481292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 
00:26:20.328 [2024-12-13 09:37:32.481444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.481468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.481626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.481644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.481832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.481849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.482003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.482024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.482185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.482202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.482390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.482407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.482629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.328 [2024-12-13 09:37:32.482648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.328 qpair failed and we were unable to recover it. 00:26:20.328 [2024-12-13 09:37:32.482795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.482812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.482963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.482980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.483180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.483198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 
00:26:20.329 [2024-12-13 09:37:32.483373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.483390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.483552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.483570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.483764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.483781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.484000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.484018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.484317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.484334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.484591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.484608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.484784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.484802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.485015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.485238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.485409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 
00:26:20.329 [2024-12-13 09:37:32.485533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.485710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.485897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.485915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.486863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.486881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.487146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.487165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 
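The same refusal repeats across several distinct qpair objects in this stretch of the log (tqpair=0x7fd574000b90, 0x7fd568000b90, and 0xb121a0 a little further on), which is consistent with the initiator tearing a failed qpair down and attempting the connection again. A generic bounded-retry sketch of that pattern (the retry count and delay are assumptions for illustration; this is not how SPDK's nvme_tcp transport actually schedules reconnects):

/* Generic bounded-retry sketch for a refused TCP connect. It mirrors the
 * repeated attempts visible in the log; retry cap and delay are arbitrary
 * illustration values, not SPDK behavior. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        int err = errno;
        close(fd);
        return -err;            /* -ECONNREFUSED (-111) when nothing listens */
    }
    return fd;                  /* connected socket on success */
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {   /* cap is arbitrary */
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        fprintf(stderr, "attempt %d: connect() failed: %s\n",
                attempt, strerror(-fd));
        usleep(100 * 1000);     /* 100 ms between retries */
    }
    fprintf(stderr, "giving up: qpair could not be recovered\n");
    return 1;
}

With no process listening on 10.0.0.2:4420, every attempt fails the same way, which is exactly the shape of the repeated output in this log.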
00:26:20.329 [2024-12-13 09:37:32.487404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.487423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.487596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.487614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.487697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.487712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.487883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.487900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.488126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.488144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.488372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.488389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.488552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.488570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.488802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.488821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.488983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.489001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.489155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.489173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 
00:26:20.329 [2024-12-13 09:37:32.489370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.489388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.489545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.489564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.489819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.489836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.490098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.490116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.329 [2024-12-13 09:37:32.490328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.329 [2024-12-13 09:37:32.490347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.329 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.490522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.490541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.490688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.490706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.490865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.490883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.490977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.490993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 
00:26:20.330 [2024-12-13 09:37:32.491343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.491969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.491986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.492086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.492265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.492467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.492578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 
00:26:20.330 [2024-12-13 09:37:32.492690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.492891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.492908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.493068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.493303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.493409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.493698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.493843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.493995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.494168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.494359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 
00:26:20.330 [2024-12-13 09:37:32.494552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.494774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.494948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.494966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.495120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.495137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.495281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.495298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.495489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.495507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.495677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.495694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.330 [2024-12-13 09:37:32.495883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.330 [2024-12-13 09:37:32.495900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.330 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.496009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.496175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 
00:26:20.331 [2024-12-13 09:37:32.496368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.496548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.496742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.496865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.496883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.497051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.497336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.497498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.497634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.497810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.497995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.498012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 
00:26:20.331 [2024-12-13 09:37:32.498214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.498231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.498464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.498654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.498671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.498903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.498921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.499922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.499940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 
00:26:20.331 [2024-12-13 09:37:32.500119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.500137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.500389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.500408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.500560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.500579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.500774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.500792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.500878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.500894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.500997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.501015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.501284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.501302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.501536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.501554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.501714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.501732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.501889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.501906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 
00:26:20.331 [2024-12-13 09:37:32.502173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.502191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.502385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.502403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.502556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.502575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.502745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.502762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.502967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.502985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.503180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.503197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.503400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.331 [2024-12-13 09:37:32.503417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.331 qpair failed and we were unable to recover it. 00:26:20.331 [2024-12-13 09:37:32.503545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.503563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.503764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.503923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.503941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 
00:26:20.332 [2024-12-13 09:37:32.504034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.504257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.504373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.504567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.504741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.504921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.504939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.505052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.505223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.505389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.505689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 
00:26:20.332 [2024-12-13 09:37:32.505864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.505976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.505992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.506216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.506233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.506440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.506462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.506622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.506640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.506873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.506890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.507071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.507088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.507322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.507340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.507616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.507638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.507726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.507744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 
00:26:20.332 [2024-12-13 09:37:32.507928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.507946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.508154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.508352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.508529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.508705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.508822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.508993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.509184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.509344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.509503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 
00:26:20.332 [2024-12-13 09:37:32.509694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.509887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.509905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.510020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.510038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.510229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.332 [2024-12-13 09:37:32.510405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.332 [2024-12-13 09:37:32.510423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.332 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.510557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.510575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.510733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.510750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.510862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.510880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.510978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.510996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 
00:26:20.333 [2024-12-13 09:37:32.511255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.511981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.511999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.512094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.512194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.512298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.512393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 
00:26:20.333 [2024-12-13 09:37:32.512614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.512891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.512909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.513098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.513352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.513465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.513594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.513769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.513996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.514207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.514459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 
00:26:20.333 [2024-12-13 09:37:32.514638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.514760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.514882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.514899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.514988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.515004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.515265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.515283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.515436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.515463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.515660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.515678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.515841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.515858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.516047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.516064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.516283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.516300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 
00:26:20.333 [2024-12-13 09:37:32.516567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.516586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.516725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.516743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.516915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.516932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.333 [2024-12-13 09:37:32.517095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.333 [2024-12-13 09:37:32.517113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.333 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.517376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.517393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.517645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.517663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.517764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.517782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.517960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.517978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.518185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.518204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.518435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.518457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 
00:26:20.334 [2024-12-13 09:37:32.518620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.518637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.518800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.518817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.519003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.519021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.519184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.519202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.519431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.519453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.519621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.519639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.519793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.519811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.520008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.520026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.520274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.520293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.520462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.520480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 
00:26:20.334 [2024-12-13 09:37:32.520709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.520726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.520984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.521002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.521245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.521263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.521495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.521513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.521674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.521692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.521898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.521916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.522082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.522100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.522304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.522322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.522520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.522542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.522637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.522652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 
00:26:20.334 [2024-12-13 09:37:32.522808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.522825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.523011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.523028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.523192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.523210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.523377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.523636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.523654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.523754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.523771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.524022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.524040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.524283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.524301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.524529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.524547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.334 qpair failed and we were unable to recover it. 00:26:20.334 [2024-12-13 09:37:32.524701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.334 [2024-12-13 09:37:32.524719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 
00:26:20.335 [2024-12-13 09:37:32.524871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.524889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.525066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.525084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.525257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.525275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.525486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.525504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.525688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.525706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.525866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.525883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.526109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.526126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.526298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.526315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.526473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.526491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.526655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.526673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 
00:26:20.335 [2024-12-13 09:37:32.526882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.527001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.527019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.527217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.527404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.527422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.527628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.527645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.527878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.527895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.528153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.528170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.528380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.528397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.528496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.528514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 00:26:20.335 [2024-12-13 09:37:32.528679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.335 [2024-12-13 09:37:32.528696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.335 qpair failed and we were unable to recover it. 
00:26:20.335 [2024-12-13 09:37:32.528869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.335 [2024-12-13 09:37:32.528886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.335 qpair failed and we were unable to recover it.
00:26:20.335-00:26:20.341 [2024-12-13 09:37:32.528869 - 09:37:32.565291] The three-line connect()/qpair-connect error sequence above repeats back to back for the rest of this span: roughly 155 times in total for tqpair=0x7fd574000b90 (including the instance shown), then roughly 40 times for tqpair=0x7fd568000b90 (starting about 09:37:32.555850) and roughly 15 times for tqpair=0x7fd56c000b90 (starting about 09:37:32.562800), always with errno = 111 against addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
00:26:20.341 [2024-12-13 09:37:32.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.565550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.565717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.565734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.565896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.565915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.566200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.566218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.566453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.566474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.566649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.566668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.566776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.566893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.566910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.567088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.567106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.567261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.567280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 
00:26:20.341 [2024-12-13 09:37:32.567456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.567479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.567638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.567657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.567769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.567788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.567997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.568015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.568257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.568274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.568434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.568458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.568655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.568673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.568803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.569031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.569195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 
00:26:20.341 [2024-12-13 09:37:32.569385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.569546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.569726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.569896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.569916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.570880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.570898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 
00:26:20.341 [2024-12-13 09:37:32.571054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.571234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.571359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.571532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.571717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.341 qpair failed and we were unable to recover it. 00:26:20.341 [2024-12-13 09:37:32.571919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.341 [2024-12-13 09:37:32.571936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.572019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.572246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.572417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.572593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 
00:26:20.342 [2024-12-13 09:37:32.572786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.572920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.572937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.573043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.573061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.573220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.573237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.573410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.573428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.573632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.573650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.573873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.573891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.574040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.574058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.574206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.574224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.574456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.574473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 
00:26:20.342 [2024-12-13 09:37:32.574576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.574593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.574799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.574817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.575978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.575996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.576091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.576107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.576250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.576267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 
00:26:20.342 [2024-12-13 09:37:32.576474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.576493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.576715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.576732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.576885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.576903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.576998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.577154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.577381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.577497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.577723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.577949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.577967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3480400 Killed "${NVMF_APP[@]}" "$@" 00:26:20.342 [2024-12-13 09:37:32.578072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.578092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 
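Editor's note: the repeated failures above line up with the "Killed" message from line 36 of target_disconnect.sh just printed — the running NVMF app has been killed, so every host-side TCP connect to 10.0.0.2:4420 is refused with errno = 111 (ECONNREFUSED) until the target is restarted. A minimal stand-alone probe, assuming a Linux host with bash's /dev/tcp support and the same address/port as in the log (not part of the actual test), would observe the same refusal:

# Probe the NVMe/TCP listener from the host side. While the target app is
# down, the TCP handshake is refused, which the SPDK sock layer reports as
# the errno = 111 seen in posix_sock_create above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 accepted a connection (target is listening again)"
else
    echo "connect refused/timed out - matches errno = 111 (ECONNREFUSED) in the log"
fi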
00:26:20.342 [2024-12-13 09:37:32.578233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.578250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.578460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.578479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.578633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.578651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:20.342 [2024-12-13 09:37:32.578812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.578831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.342 qpair failed and we were unable to recover it. 00:26:20.342 [2024-12-13 09:37:32.578990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.342 [2024-12-13 09:37:32.579007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.579112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:20.343 [2024-12-13 09:37:32.579393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.343 [2024-12-13 09:37:32.579575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.579693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 
00:26:20.343 [2024-12-13 09:37:32.579814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.343 [2024-12-13 09:37:32.579935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.579953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.343 [2024-12-13 09:37:32.580215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.580331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.580438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.580621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.580805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.580914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.580932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.581033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 
00:26:20.343 [2024-12-13 09:37:32.581195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.581405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.581655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.581791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.581917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.581934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.582057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.582182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.582280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.582651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.582773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 
00:26:20.343 [2024-12-13 09:37:32.582941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.582959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.583901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.583925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.584122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.584141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.584304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.584323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 00:26:20.343 [2024-12-13 09:37:32.584413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.343 [2024-12-13 09:37:32.584430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.343 qpair failed and we were unable to recover it. 
00:26:20.343 [2024-12-13 09:37:32.584531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.584549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.584650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.584667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.584916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.584932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.585164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.585182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.585392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.585409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.585547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.585564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.585743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.585761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3481284 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3481284 00:26:20.344 [2024-12-13 09:37:32.587016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 
00:26:20.344 [2024-12-13 09:37:32.587309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3481284 ']' 00:26:20.344 [2024-12-13 09:37:32.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.587660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.344 [2024-12-13 09:37:32.587793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.587972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.587992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.344 [2024-12-13 09:37:32.588077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.588096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.588306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.588325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.344 [2024-12-13 09:37:32.588484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.588504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 
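Editor's note: the xtrace lines above show nvmf_target_disconnect_tc2 restarting the target: nvmfappstart launches nvmf_tgt with -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk namespace, and waitforlisten blocks until pid 3481284 is serving /var/tmp/spdk.sock. A rough stand-alone sketch of that start-and-wait pattern is below; the ip netns wrapper is omitted, and the rpc.py spdk_get_version probe is an illustrative assumption rather than the helper's actual implementation.

# Launch the target in the background, then poll its JSON-RPC socket until it
# answers, so the host-side test only reconnects once the listener is up.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    # Bail out if the target died before it ever started listening.
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"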
00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.344 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.344 [2024-12-13 09:37:32.589374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.589406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.589592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.589612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.589829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.589851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.589965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.589981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.590256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.590273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.590436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.590458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.590629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.590649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.590764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.590781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.590941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.590958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 
00:26:20.344 [2024-12-13 09:37:32.591063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.591978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.591998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.592222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.592239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.592432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.592455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 00:26:20.344 [2024-12-13 09:37:32.592619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.344 [2024-12-13 09:37:32.592636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.344 qpair failed and we were unable to recover it. 
00:26:20.345 [2024-12-13 09:37:32.592743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.592759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.592912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.592928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.593984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.593999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.594186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 
00:26:20.345 [2024-12-13 09:37:32.594435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.594544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.594661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.594865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.594959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.594974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 
00:26:20.345 [2024-12-13 09:37:32.595852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.595972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.595987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.596940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.596959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.597191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.597207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.597505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.597524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 
00:26:20.345 [2024-12-13 09:37:32.597665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.597681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.597840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.597856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.597959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.597976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.598084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.598101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.598184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.598201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.598295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.598311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.598459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.598480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.598634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.598651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.345 [2024-12-13 09:37:32.599531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.345 [2024-12-13 09:37:32.599561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.345 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.599751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.599770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 
00:26:20.346 [2024-12-13 09:37:32.599921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.599937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.600976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.601184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.601201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.601343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.601359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 
00:26:20.346 [2024-12-13 09:37:32.601437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.601459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.601683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.601700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.601853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.601869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.602916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.602931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 
00:26:20.346 [2024-12-13 09:37:32.603358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.603975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.603992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.604082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.604264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.604431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.604542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 
00:26:20.346 [2024-12-13 09:37:32.604663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.604905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.604922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.605017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.605032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.605233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.605249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.605330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.605346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.605497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.605514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.606303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.606336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.346 [2024-12-13 09:37:32.606567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.346 [2024-12-13 09:37:32.606586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.346 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.606699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.606715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.606813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.606829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 
00:26:20.347 [2024-12-13 09:37:32.606980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.606997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.607179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.607194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.607417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.607433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.607546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.607561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.607712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.607729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.607879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.607895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.608155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.608253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.608268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.608484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.608502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.608692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.608708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 
00:26:20.347 [2024-12-13 09:37:32.608872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.608888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.609911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.609927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 
00:26:20.347 [2024-12-13 09:37:32.610231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.610946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.610963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 
00:26:20.347 [2024-12-13 09:37:32.611572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.611982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.611999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.612075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.612091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.612185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.612203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-12-13 09:37:32.612299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.347 [2024-12-13 09:37:32.612314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.613150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.613181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.613434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.613458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.613694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.613711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 
00:26:20.348 [2024-12-13 09:37:32.613801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.613816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.613894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.613910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.614865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.614881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 
00:26:20.348 [2024-12-13 09:37:32.615258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.615870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.615887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 
00:26:20.348 [2024-12-13 09:37:32.616463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.616946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.616961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.617102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.617118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.617245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.617261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.617342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.617359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 00:26:20.348 [2024-12-13 09:37:32.617462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.348 [2024-12-13 09:37:32.617478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.348 qpair failed and we were unable to recover it. 
00:26:20.348 [2024-12-13 09:37:32.617571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.617586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.617735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.617752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.617841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.617857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.617951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.617966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 
00:26:20.349 [2024-12-13 09:37:32.618725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.618926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.618943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 
00:26:20.349 [2024-12-13 09:37:32.619803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.619975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.619991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.620902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.620999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 
00:26:20.349 [2024-12-13 09:37:32.621111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.621888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.621993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.622009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 00:26:20.349 [2024-12-13 09:37:32.622105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.349 [2024-12-13 09:37:32.622122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.349 qpair failed and we were unable to recover it. 
00:26:20.350 [2024-12-13 09:37:32.622208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.622898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.622987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 
00:26:20.350 [2024-12-13 09:37:32.623526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.623925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.623941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 
00:26:20.350 [2024-12-13 09:37:32.624642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.624920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 
00:26:20.350 [2024-12-13 09:37:32.625847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.625940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.625955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.350 [2024-12-13 09:37:32.626717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.350 qpair failed and we were unable to recover it. 00:26:20.350 [2024-12-13 09:37:32.626816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.626832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 
00:26:20.351 [2024-12-13 09:37:32.626996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.627899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.627915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 
00:26:20.351 [2024-12-13 09:37:32.628109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.628959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.628974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 
00:26:20.351 [2024-12-13 09:37:32.629139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.629910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.629926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 
00:26:20.351 [2024-12-13 09:37:32.630373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.630844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.351 [2024-12-13 09:37:32.630859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.351 qpair failed and we were unable to recover it. 00:26:20.351 [2024-12-13 09:37:32.631015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 
00:26:20.352 [2024-12-13 09:37:32.631513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.631949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.631966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 
00:26:20.352 [2024-12-13 09:37:32.632708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.632894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.632988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.633916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.633932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 
00:26:20.352 [2024-12-13 09:37:32.633999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.352 qpair failed and we were unable to recover it. 00:26:20.352 [2024-12-13 09:37:32.634868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.352 [2024-12-13 09:37:32.634883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 
00:26:20.353 [2024-12-13 09:37:32.635292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.635912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.635928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 
00:26:20.353 [2024-12-13 09:37:32.636549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.636863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.636880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637300] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:26:20.353 [2024-12-13 09:37:32.637353] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.353 [2024-12-13 09:37:32.637381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 
00:26:20.353 [2024-12-13 09:37:32.637640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.637907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.637923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 
00:26:20.353 [2024-12-13 09:37:32.638583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.353 qpair failed and we were unable to recover it. 00:26:20.353 [2024-12-13 09:37:32.638905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.353 [2024-12-13 09:37:32.638924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 
00:26:20.354 [2024-12-13 09:37:32.639642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.639920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.639938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.640082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.640099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.640295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.640311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.640460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.640477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.640717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.640734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.640886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.640903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 
00:26:20.354 [2024-12-13 09:37:32.641159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.641956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.641972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 
00:26:20.354 [2024-12-13 09:37:32.642446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.642757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.642779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.643862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 
00:26:20.354 [2024-12-13 09:37:32.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.643980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.644077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.644281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.644376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.644473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.354 [2024-12-13 09:37:32.644564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.354 qpair failed and we were unable to recover it. 00:26:20.354 [2024-12-13 09:37:32.644720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.644737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.644942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.644962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 
00:26:20.355 [2024-12-13 09:37:32.645366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.645936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.645953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 
00:26:20.355 [2024-12-13 09:37:32.646580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.646961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.646978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.647183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.647200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.647288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.647305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.647442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.647466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.355 [2024-12-13 09:37:32.647608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.355 [2024-12-13 09:37:32.647625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.355 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.647768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.647785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.647886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.647902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 
00:26:20.645 [2024-12-13 09:37:32.648055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.648308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.648465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.648634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.648806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.648906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.648923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 
00:26:20.645 [2024-12-13 09:37:32.649616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.649954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.649970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.650926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.650942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 
00:26:20.645 [2024-12-13 09:37:32.651014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.651891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.645 [2024-12-13 09:37:32.651911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.645 qpair failed and we were unable to recover it. 00:26:20.645 [2024-12-13 09:37:32.652050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.652233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.652393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 
00:26:20.646 [2024-12-13 09:37:32.652495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.652590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.652707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.652873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.652889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 
00:26:20.646 [2024-12-13 09:37:32.653734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.653944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.653961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.654881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.654898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 
00:26:20.646 [2024-12-13 09:37:32.655068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.655910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.655926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.656019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.656189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 
00:26:20.646 [2024-12-13 09:37:32.656474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.656579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.656678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.656877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.656893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.657053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.657069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.646 [2024-12-13 09:37:32.657147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.646 [2024-12-13 09:37:32.657164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.646 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.657262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.657447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.657551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.657664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 
00:26:20.647 [2024-12-13 09:37:32.657779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.657943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.657959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.658882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.658898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 
00:26:20.647 [2024-12-13 09:37:32.659141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.659852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.659869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.660023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.660040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.660124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.660141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.660292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.660308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 00:26:20.647 [2024-12-13 09:37:32.660396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.647 [2024-12-13 09:37:32.660413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.647 qpair failed and we were unable to recover it. 
00:26:20.647 [2024-12-13 09:37:32.660525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.647 [2024-12-13 09:37:32.660543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.647 qpair failed and we were unable to recover it.
00:26:20.647 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 09:37:32.660 through 09:37:32.690 for tqpair=0xb121a0, 0x7fd568000b90, 0x7fd574000b90, and 0x7fd56c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:20.653 [2024-12-13 09:37:32.690933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.653 [2024-12-13 09:37:32.690949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420
00:26:20.653 qpair failed and we were unable to recover it.
00:26:20.653 [2024-12-13 09:37:32.691097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.691113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.691282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.691299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.691485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.691502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.691674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.691689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.691760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.691776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.691988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 
00:26:20.653 [2024-12-13 09:37:32.692660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.692885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.692900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.693939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.693954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.694032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 
00:26:20.653 [2024-12-13 09:37:32.694212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.694441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.694534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.694761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.694890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.694906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.695060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.695226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.695459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.695579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.695685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 
00:26:20.653 [2024-12-13 09:37:32.695852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.695868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.696023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.696196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.696212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.696304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.696319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.696406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.653 [2024-12-13 09:37:32.696422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.653 qpair failed and we were unable to recover it. 00:26:20.653 [2024-12-13 09:37:32.696600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.696617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.696764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.696782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.696881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.696900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 
00:26:20.654 [2024-12-13 09:37:32.697368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.697887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.697904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 
00:26:20.654 [2024-12-13 09:37:32.698733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.698751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.698987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.699895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.699996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 
00:26:20.654 [2024-12-13 09:37:32.700203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.700918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.700934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.701008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.701024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.701214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.701230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.701395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 00:26:20.654 [2024-12-13 09:37:32.701589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.654 [2024-12-13 09:37:32.701607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.654 qpair failed and we were unable to recover it. 
00:26:20.655 [2024-12-13 09:37:32.701762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.701778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.701998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.702971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.702988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 
00:26:20.655 [2024-12-13 09:37:32.703421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.703941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.703957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 
00:26:20.655 [2024-12-13 09:37:32.704636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.704823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.704999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.705103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.705326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.705553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.705660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.705836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.705853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 
00:26:20.655 [2024-12-13 09:37:32.706344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.706941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.706958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.707116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.707135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.707306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.707322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.707415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.707432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.707528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.707553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.655 qpair failed and we were unable to recover it. 00:26:20.655 [2024-12-13 09:37:32.707663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.655 [2024-12-13 09:37:32.707680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 
00:26:20.656 [2024-12-13 09:37:32.707821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.707837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.707981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.707997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.708895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.708999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 
00:26:20.656 [2024-12-13 09:37:32.709111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.709970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.709986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.710081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.710097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 00:26:20.656 [2024-12-13 09:37:32.710191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.656 [2024-12-13 09:37:32.710208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.656 qpair failed and we were unable to recover it. 
00:26:20.656 [2024-12-13 09:37:32.710296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.656 [2024-12-13 09:37:32.710311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.656 qpair failed and we were unable to recover it.
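errno 111 on Linux is ECONNREFUSED: the connect() to 10.0.0.2 port 4420 is actively refused, meaning nothing is accepting connections on the NVMe/TCP port at that moment, and the same posix_sock_create / nvme_tcp_qpair_connect_sock failure pair recurs throughout this stretch of the log for tqpair=0x7fd574000b90. A minimal sketch using plain POSIX sockets (not SPDK's posix_sock_create; the address and port are simply taken from the log) reproduces the same errno when no listener is present:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Plain POSIX sockets only; address and port copied from the log above. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port used by the test */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED) on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }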
00:26:20.659 [2024-12-13 09:37:32.724082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:20.661 [2024-12-13 09:37:32.739340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.739366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.739489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.739522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.739676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.739699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.739802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.739819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.739982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.739998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.740143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.740159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.740251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.740267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.740409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.740425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.661 [2024-12-13 09:37:32.740593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.661 [2024-12-13 09:37:32.740610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.661 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.740779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.740795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 
00:26:20.662 [2024-12-13 09:37:32.740882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.740897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.740983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.741081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.741260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.741460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.741642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.741896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.741915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 
00:26:20.662 [2024-12-13 09:37:32.742460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.742942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.742959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 
00:26:20.662 [2024-12-13 09:37:32.743643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.743757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.743773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.744855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.744871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.745028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.745207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 
00:26:20.662 [2024-12-13 09:37:32.745480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.745581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.745764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.745927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.745943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.746097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.746115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.746219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.746235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.746395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.746411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.662 [2024-12-13 09:37:32.746530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.662 qpair failed and we were unable to recover it. 00:26:20.662 [2024-12-13 09:37:32.746631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.746648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.746810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.746826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 
00:26:20.663 [2024-12-13 09:37:32.746964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.746980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.747950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.747970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.748059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.748233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 
00:26:20.663 [2024-12-13 09:37:32.748478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.748651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.748756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.748947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.748962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.749788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 
00:26:20.663 [2024-12-13 09:37:32.749952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.749970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.750906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.750921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 
00:26:20.663 [2024-12-13 09:37:32.751420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.663 [2024-12-13 09:37:32.751646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.663 qpair failed and we were unable to recover it. 00:26:20.663 [2024-12-13 09:37:32.751816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.751834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.751993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 
00:26:20.664 [2024-12-13 09:37:32.752767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.752942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.752961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.753892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.753909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 
00:26:20.664 [2024-12-13 09:37:32.754171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.754934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.754950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 
00:26:20.664 [2024-12-13 09:37:32.755340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.755845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.755996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 
00:26:20.664 [2024-12-13 09:37:32.756670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-12-13 09:37:32.756797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.664 qpair failed and we were unable to recover it. 00:26:20.664 [2024-12-13 09:37:32.756876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.756892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.756969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.756986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 
00:26:20.665 [2024-12-13 09:37:32.757803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.757895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.757911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 
00:26:20.665 [2024-12-13 09:37:32.758891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.758907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.758988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.759940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.759955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 
00:26:20.665 [2024-12-13 09:37:32.760230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.760922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.760939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.761029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.761045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.761137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.761154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.761307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.761323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 
00:26:20.665 [2024-12-13 09:37:32.761483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-12-13 09:37:32.761500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.665 qpair failed and we were unable to recover it. 00:26:20.665 [2024-12-13 09:37:32.761650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.761668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.761910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.761927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.762884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.762900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 
00:26:20.666 [2024-12-13 09:37:32.762983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.763918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.763934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 
00:26:20.666 [2024-12-13 09:37:32.764436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.764967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.764984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 
00:26:20.666 [2024-12-13 09:37:32.765785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.765912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.765993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 00:26:20.666 [2024-12-13 09:37:32.766813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-12-13 09:37:32.766830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.666 qpair failed and we were unable to recover it. 
00:26:20.667 [2024-12-13 09:37:32.766904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.766920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.767912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.767929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 
00:26:20.667 [2024-12-13 09:37:32.768259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.667 [2024-12-13 09:37:32.768443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.667 [2024-12-13 09:37:32.768452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.667 [2024-12-13 09:37:32.768465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.667 [2024-12-13 09:37:32.768468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 [2024-12-13 09:37:32.768470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.768883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.768917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 
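Note on the app_setup_trace notices above: the target reports the two ways to pull its trace data, either snapshotting events while the app is still running or keeping the shared-memory trace file for later. A minimal sketch of both options, based only on those notices and assuming the commands are run on the test node with the SPDK tools in PATH (the /tmp destination is an arbitrary example, not taken from the log):
  # Snapshot trace events from the running nvmf target (shm id 0, as reported above).
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0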
00:26:20.667 [2024-12-13 09:37:32.769247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:20.667 [2024-12-13 09:37:32.769876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.769892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.769828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:20.667 [2024-12-13 09:37:32.769932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:20.667 [2024-12-13 09:37:32.770005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.667 [2024-12-13 09:37:32.769933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.770270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.770369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 
00:26:20.667 [2024-12-13 09:37:32.770477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.770585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.770756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.667 [2024-12-13 09:37:32.770773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.667 qpair failed and we were unable to recover it. 00:26:20.667 [2024-12-13 09:37:32.770873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.770890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.770981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.770998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.771092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.771287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.771455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.771559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.771677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 
00:26:20.668 [2024-12-13 09:37:32.771900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.771918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.772975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.772991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 
00:26:20.668 [2024-12-13 09:37:32.773175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.773926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.773943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 
00:26:20.668 [2024-12-13 09:37:32.774401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.774841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.774983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.775144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.775314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.775563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.775676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.775791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 
00:26:20.668 [2024-12-13 09:37:32.775949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.775966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.776057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.668 [2024-12-13 09:37:32.776077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.668 qpair failed and we were unable to recover it. 00:26:20.668 [2024-12-13 09:37:32.776224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.776905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.776985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 
00:26:20.669 [2024-12-13 09:37:32.777188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.777919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.777937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 
00:26:20.669 [2024-12-13 09:37:32.778501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.778957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.778975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.779780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 
00:26:20.669 [2024-12-13 09:37:32.779941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.779959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.780975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.780993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.781182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.781200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 00:26:20.669 [2024-12-13 09:37:32.781381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.781397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.669 qpair failed and we were unable to recover it. 
00:26:20.669 [2024-12-13 09:37:32.781473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.669 [2024-12-13 09:37:32.781490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.781645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.781667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.781764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.781781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.781857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.781874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.782896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.782914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 
00:26:20.670 [2024-12-13 09:37:32.782993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.783946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.783963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.784105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.784282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 
00:26:20.670 [2024-12-13 09:37:32.784383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.784540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.784810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.784915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.784931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.785695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 
00:26:20.670 [2024-12-13 09:37:32.785859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.785876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.786902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.786919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.787024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.670 [2024-12-13 09:37:32.787041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.670 qpair failed and we were unable to recover it. 00:26:20.670 [2024-12-13 09:37:32.787188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 
00:26:20.671 [2024-12-13 09:37:32.787300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.787404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.787506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.787757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.787889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.787913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 
00:26:20.671 [2024-12-13 09:37:32.788743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.788856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.788873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.789963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.789979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 
00:26:20.671 [2024-12-13 09:37:32.790061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.790980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.790997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.791194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.791226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.791403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.791422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 
00:26:20.671 [2024-12-13 09:37:32.791509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.791526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.791620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.791637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.791775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.791792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.792046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.792062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.792218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.792235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.792374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.792390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.671 qpair failed and we were unable to recover it. 00:26:20.671 [2024-12-13 09:37:32.792553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.671 [2024-12-13 09:37:32.792571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.792721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.792736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.792833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.792848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.792937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.792953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 
00:26:20.672 [2024-12-13 09:37:32.793042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.793950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.793967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 
00:26:20.672 [2024-12-13 09:37:32.794321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.794945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.794964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 
00:26:20.672 [2024-12-13 09:37:32.795569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.795976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.795995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.796832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.796850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 
00:26:20.672 [2024-12-13 09:37:32.797073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.797093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.797239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.797258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.797443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.672 [2024-12-13 09:37:32.797470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.672 qpair failed and we were unable to recover it. 00:26:20.672 [2024-12-13 09:37:32.797642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.797659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.797834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.797850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.798005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.798265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.798394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.798524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.798715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 
00:26:20.673 [2024-12-13 09:37:32.798931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.798947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.799128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.799339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.799360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.799531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.799549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.799699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.799715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.799867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.799883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.800088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.800335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.800563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.800713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 
00:26:20.673 [2024-12-13 09:37:32.800822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.800978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.800993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.801083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.801284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.801301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.801394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.801409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.801689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.801707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.801853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.801871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.802096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.802250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.802482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 
00:26:20.673 [2024-12-13 09:37:32.802725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.802851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.802976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.802991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.803131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.803148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.803386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.803402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.803622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.803640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.803781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.803797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.803946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.803962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.804156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.804172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.804322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.804338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 
00:26:20.673 [2024-12-13 09:37:32.804533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.804550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.804796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.804813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.673 [2024-12-13 09:37:32.804979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.673 [2024-12-13 09:37:32.804994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.673 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.805134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.805151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.805377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.805393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.805624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.805642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.805853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.805870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 
00:26:20.674 [2024-12-13 09:37:32.806453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.806858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.806999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 
00:26:20.674 [2024-12-13 09:37:32.807820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.807982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.807997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.808977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.808993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 
00:26:20.674 [2024-12-13 09:37:32.809196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.809883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.809899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.810038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.810215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.810305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.810466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 
00:26:20.674 [2024-12-13 09:37:32.810640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.810818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.674 [2024-12-13 09:37:32.810835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.674 qpair failed and we were unable to recover it. 00:26:20.674 [2024-12-13 09:37:32.811008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.811948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.811964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.812175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.812191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-12-13 09:37:32.812412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.812429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.812681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.812698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.812877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.813038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.813055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.813239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.813284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.813521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.813554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.813713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.813731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.813997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.814116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.814312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-12-13 09:37:32.814493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.814696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.814864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.814880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.815953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.815969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.816198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.816214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-12-13 09:37:32.816433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.816455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.816607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.816623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.816829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.816847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.817923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.817938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-12-13 09:37:32.818014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.675 [2024-12-13 09:37:32.818030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-12-13 09:37:32.818124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.818232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.818438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.818550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.818709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.818913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.818930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-12-13 09:37:32.819420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.819846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.819999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd568000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.820220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd56c000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.820362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.820572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.820737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.820895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.820912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.821143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.821160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-12-13 09:37:32.821346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.821363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.821597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.821614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.821765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.821781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.821966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.821983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.822197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.822214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.822378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.822395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.822600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.822618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.822734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.822749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.822962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.822978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.823216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.823233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-12-13 09:37:32.823458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.823475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.823634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.823651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.823881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.823897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.824128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.824146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.824258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.824274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-12-13 09:37:32.824441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.676 [2024-12-13 09:37:32.824464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.824635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.824651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.824808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.824823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.825030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.825046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.825264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.825280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 
00:26:20.677 [2024-12-13 09:37:32.825516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.825533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.825625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.825644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.825852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.825868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.826073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.826090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.826250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.826266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.826477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.826494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.826703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.826720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.826956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.826972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.827144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.827160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.827347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.827363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 
00:26:20.677 [2024-12-13 09:37:32.827459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.827636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.827652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.827824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.827840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.828094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.828316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.828490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.828649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.828765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.828992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.829008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.829155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.829170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 
00:26:20.677 [2024-12-13 09:37:32.829409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.829424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.829672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.829688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.829924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.829940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.830122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.830138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.830390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.830406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.830569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.830586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.830742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.830757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.830964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.830980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.831195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.831211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.831407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.831423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 
00:26:20.677 [2024-12-13 09:37:32.831637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.831653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.831888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.831904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.832043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.832058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.832284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.677 [2024-12-13 09:37:32.832300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.677 qpair failed and we were unable to recover it. 00:26:20.677 [2024-12-13 09:37:32.832457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.832473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.832706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.832722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.832948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.832965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.833195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.833211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.833443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.833608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.833624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 
00:26:20.678 [2024-12-13 09:37:32.833784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.833799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.833968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.833984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.834209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.834229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.834477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.834495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.834732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.834749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.834931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.834947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.835196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.835211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.835369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.835385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.835595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.835612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.835764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.835780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 
00:26:20.678 [2024-12-13 09:37:32.836003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.836176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.836427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.836597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.836795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.836965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.836980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.837141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.837157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.837310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.837326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.837430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.837446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.837594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.837610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 
00:26:20.678 [2024-12-13 09:37:32.837791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.837807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.837987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.838004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.838232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.838248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.838335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.838351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.838559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.838576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.838834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.838850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.839072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.839236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.839484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.839695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 
00:26:20.678 [2024-12-13 09:37:32.839889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.839982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.839998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.840080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.678 [2024-12-13 09:37:32.840096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.678 qpair failed and we were unable to recover it. 00:26:20.678 [2024-12-13 09:37:32.840318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.840334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.840572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.840589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.840776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.840792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.840891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.840907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.841048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.841063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.841278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.841294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.841546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.841563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 
00:26:20.679 [2024-12-13 09:37:32.841728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.841744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.841989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.842004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.842214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.842234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.842394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.842410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.842639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.842655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.842815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.842831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.842991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.843145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.843364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.843540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 
00:26:20.679 [2024-12-13 09:37:32.843778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.843965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.843981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.844186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.844202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.844357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.844373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.844525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.844541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.844746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.844761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.844907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.844922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 
00:26:20.679 [2024-12-13 09:37:32.845441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.845861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.845877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.846920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.846936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 
00:26:20.679 [2024-12-13 09:37:32.847075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.847090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.847232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.679 [2024-12-13 09:37:32.847247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.679 qpair failed and we were unable to recover it. 00:26:20.679 [2024-12-13 09:37:32.847405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.847421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.847504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.847520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.847746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.847762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.847869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.847885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 
00:26:20.680 [2024-12-13 09:37:32.848541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.848903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.848922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 
00:26:20.680 [2024-12-13 09:37:32.849880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.849982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.849998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.850203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.850219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.850459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.850475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.850619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.850635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.850732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.850748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.850968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.850984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.851073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.851089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.851309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.851325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.851557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.851573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 
00:26:20.680 [2024-12-13 09:37:32.851841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.851857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.852071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.852086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.852342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.852358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.852591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.852607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.852779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.852795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.852949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.852965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.680 [2024-12-13 09:37:32.853150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.680 [2024-12-13 09:37:32.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.680 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.853262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.853277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.853436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.853455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.853616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.853631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 
00:26:20.681 [2024-12-13 09:37:32.853769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.853784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.853957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.853973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.854229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.854244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.854466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.854482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.854718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.854734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.854940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.854955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.855101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.855117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.855297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.855312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.855463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.855479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.855732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.855748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 
00:26:20.681 [2024-12-13 09:37:32.855960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.855975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.856138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.856154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.856310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.856331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.856584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.856601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.856765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.856780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.856946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.856962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.857117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.857132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.857280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.857295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.857457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.857474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.857733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.857749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 
00:26:20.681 [2024-12-13 09:37:32.858016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.858032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.858204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.858219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.858381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.681 [2024-12-13 09:37:32.858396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.681 qpair failed and we were unable to recover it. 00:26:20.681 [2024-12-13 09:37:32.858538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.858555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.858653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.858669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.858815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.858830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.859061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.859077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.859236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.859252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.859480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.859497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.859654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.859670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 
00:26:20.682 [2024-12-13 09:37:32.859826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.859997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.860151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.860342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.860515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.860757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.860954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.860969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.861179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.861195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.861347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.861363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.861568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.861584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 
00:26:20.682 [2024-12-13 09:37:32.861727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.861743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.861965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.861980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.862213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.862228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.862478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.862494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.862703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.862718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.862974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.862990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.863198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.863214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.863462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.863478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.863713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.863728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.863882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.863898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 
00:26:20.682 [2024-12-13 09:37:32.864073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.864088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.864228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.864243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.864497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.864516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.864755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.864771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.864930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.864945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.865178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.865194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.865431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.865446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.865622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.865637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.865872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.865888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.866034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.866050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 
00:26:20.682 [2024-12-13 09:37:32.866234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.866249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.866476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.866493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.682 qpair failed and we were unable to recover it. 00:26:20.682 [2024-12-13 09:37:32.866651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.682 [2024-12-13 09:37:32.866666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.866906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.866922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.867104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.867120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.867205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.867220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.867406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.867421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.867613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.867629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.867856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.867871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.868011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.868026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 
00:26:20.683 [2024-12-13 09:37:32.868184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.868199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.868348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.868363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.868618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.868634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.868809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.868824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.868994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.869009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.869215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.869230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.869408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.869424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.869635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.869652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.869903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.869918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.870180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.870195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 
00:26:20.683 [2024-12-13 09:37:32.870345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.870360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.870564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.870580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.870722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.870737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.870906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.870921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.871106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.871121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.871355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.871370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.871609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.871625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.871856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.871871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.871979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.871994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.872228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.872244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 
00:26:20.683 [2024-12-13 09:37:32.872395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.872411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.872581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.872598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.872743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.872761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.873900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.873916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 
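The errno = 111 reported by posix_sock_create in the records above is Linux ECONNREFUSED: the target address 10.0.0.2 is reachable, but nothing is accepting on TCP port 4420 (the NVMe/TCP port used throughout this log) at that moment, which is the expected state partway through a target-disconnect test. A minimal stand-alone C sketch, independent of SPDK, that reproduces the same errno when pointed at a port with no listener; the address and port are copied from the log and would need adjusting for a local experiment:

/* Hypothetical demo, not SPDK code: show how connect() to a port with no
 * listener fails with ECONNREFUSED (errno 111 on Linux), matching the
 * posix_sock_create errors recorded above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host and no listener, Linux reports errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}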
00:26:20.683 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.683 [2024-12-13 09:37:32.874170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.874188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 [2024-12-13 09:37:32.874279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.683 [2024-12-13 09:37:32.874294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.683 qpair failed and we were unable to recover it. 00:26:20.683 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:20.684 [2024-12-13 09:37:32.874500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.874518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.874771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.874788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.684 [2024-12-13 09:37:32.874941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.874958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.684 [2024-12-13 09:37:32.875187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.875205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.875284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.875299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.875406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.875423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 
00:26:20.684 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.684 [2024-12-13 09:37:32.875583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.875600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.875826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.875842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.876020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.876036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.876277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.876293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.876468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.876486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.876697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.876713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.876903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.876918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.877074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.877090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.877324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.877340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 
00:26:20.684 [2024-12-13 09:37:32.877565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.877582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.877752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.877768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.877929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.877944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.878148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.878165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.878327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.878343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.878575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.878591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.878753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.878768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.878932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.878948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.879137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.879154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.879399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.879416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 
00:26:20.684 [2024-12-13 09:37:32.879577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.879594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.879764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.879780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.880904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.880920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.881028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.881043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.881138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.881154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 
00:26:20.684 [2024-12-13 09:37:32.881362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.881378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.881545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.684 [2024-12-13 09:37:32.881562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.684 qpair failed and we were unable to recover it. 00:26:20.684 [2024-12-13 09:37:32.881651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.881669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.881823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.881840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.881993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 
00:26:20.685 [2024-12-13 09:37:32.882867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.882976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.882992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.883889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.883905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.884066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 
00:26:20.685 [2024-12-13 09:37:32.884154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.884316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.884549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.884724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.884947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.884964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 
00:26:20.685 [2024-12-13 09:37:32.885698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.885923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.885938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.886160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.886323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.886438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.886534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.685 [2024-12-13 09:37:32.886632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.685 qpair failed and we were unable to recover it. 00:26:20.685 [2024-12-13 09:37:32.886736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.886752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.886900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.886916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.886993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 
00:26:20.686 [2024-12-13 09:37:32.887163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.887275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.887387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.887567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.887661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.887843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.887859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 
00:26:20.686 [2024-12-13 09:37:32.888556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.888924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.888941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.889851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 
00:26:20.686 [2024-12-13 09:37:32.889953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.889968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.890893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.890909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 
00:26:20.686 [2024-12-13 09:37:32.891172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.891889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.686 [2024-12-13 09:37:32.891905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.686 qpair failed and we were unable to recover it. 00:26:20.686 [2024-12-13 09:37:32.892134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.892241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.892390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.892564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 
00:26:20.687 [2024-12-13 09:37:32.892677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.892857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.892873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.893837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.893852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 
00:26:20.687 [2024-12-13 09:37:32.894334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.894914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.894929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 
00:26:20.687 [2024-12-13 09:37:32.895721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.895901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.895916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.896914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.896929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.897082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.897097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 
00:26:20.687 [2024-12-13 09:37:32.897262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.897277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.897442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.897463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.897610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.897626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.897716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.687 [2024-12-13 09:37:32.897732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.687 qpair failed and we were unable to recover it. 00:26:20.687 [2024-12-13 09:37:32.897889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.897908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 
00:26:20.688 [2024-12-13 09:37:32.898715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.898913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.898930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.899138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.899154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it. 00:26:20.688 [2024-12-13 09:37:32.899301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.688 [2024-12-13 09:37:32.899317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.688 qpair failed and we were unable to recover it.
00:26:20.688 [2024-12-13 09:37:32.899404 through 09:37:32.907091] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: repeated *ERROR*: connect() failed, errno = 111 / sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (same three-line pattern repeated for each retry in this interval)
00:26:20.689 [2024-12-13 09:37:32.907707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.907723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.907832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.907847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.907931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.907948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:20.689 [2024-12-13 09:37:32.908107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.908280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:20.689 [2024-12-13 09:37:32.908504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.908666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.689 [2024-12-13 09:37:32.908785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.908883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.908898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.908988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:20.689 [2024-12-13 09:37:32.909155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.909280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.909385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.909496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.689 qpair failed and we were unable to recover it.
00:26:20.689 [2024-12-13 09:37:32.909654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.689 [2024-12-13 09:37:32.909669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.690 qpair failed and we were unable to recover it.
00:26:20.690 [2024-12-13 09:37:32.909772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.690 [2024-12-13 09:37:32.909788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.690 qpair failed and we were unable to recover it.
00:26:20.690 [2024-12-13 09:37:32.909873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.690 [2024-12-13 09:37:32.909889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420
00:26:20.690 qpair failed and we were unable to recover it.
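The rpc_cmd bdev_malloc_create 64 512 -b Malloc0 step traced above allocates the RAM-backed block device used by this test case; the surrounding connect() failed, errno = 111 records are ECONNREFUSED results, consistent with the target side being intentionally unreachable during the disconnect scenario. A minimal standalone sketch of the same call, assuming the stock SPDK scripts/rpc.py client, a target application already listening on its default RPC socket, and a shell started in the SPDK source root (the bdev_get_bdevs check is illustrative only and not part of the traced script):

  # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Confirm the bdev is registered before attaching it to an NVMe-oF subsystem
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0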
00:26:20.690 [2024-12-13 09:37:32.909968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.909984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.910931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.910946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.911116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.911233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 
00:26:20.690 [2024-12-13 09:37:32.911411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.911518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.911689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.911783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.911798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 
00:26:20.690 [2024-12-13 09:37:32.912635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.690 [2024-12-13 09:37:32.912924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.690 [2024-12-13 09:37:32.912939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.690 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 
00:26:20.691 [2024-12-13 09:37:32.913686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.913906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.913990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 
00:26:20.691 [2024-12-13 09:37:32.914754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.914941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.914956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.915870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 
00:26:20.691 [2024-12-13 09:37:32.915980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.915997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.916780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.916796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 
00:26:20.691 [2024-12-13 09:37:32.917443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.917905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.691 [2024-12-13 09:37:32.917999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.691 [2024-12-13 09:37:32.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.691 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.918320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.918337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.918493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.918509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.918739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.918755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.918856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.918871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.918979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.918995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.919256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.919272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 
00:26:20.692 [2024-12-13 09:37:32.919422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.919436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.919621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.919637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.919724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.919738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.919920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.919935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.920776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 
00:26:20.692 [2024-12-13 09:37:32.920950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.920964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.921186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.921201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.921344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.921360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.921533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.921549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.921651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.921666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.921904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.921919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.922080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.922095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.922234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.922248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.922475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.922491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.922604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.922620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 
00:26:20.692 [2024-12-13 09:37:32.922771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.922786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.923036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.923187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.923202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.923456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.923471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.923633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.923649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.923834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.923848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.924004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.924160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.924283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.924477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 
00:26:20.692 [2024-12-13 09:37:32.924737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.924943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.924958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd574000b90 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.925253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.925283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.925382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.925398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.925566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.925583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.692 [2024-12-13 09:37:32.925744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.692 [2024-12-13 09:37:32.925760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.692 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.925870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.925885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.926117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.926133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.926283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.926298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.926519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.926537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 
00:26:20.693 [2024-12-13 09:37:32.926694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.926709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.926858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.926873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.927049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.927065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.927282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.927297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.927392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.927407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.927638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.927655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.927886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.927902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.928052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.928068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.928276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.928292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.928531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.928548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 
00:26:20.693 [2024-12-13 09:37:32.928733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.928749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.929020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.929035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.929194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.929210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.929378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.929394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.929603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.929619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.929836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.929851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.930086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.930101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.930263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.930279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.930453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.930469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.930633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.930652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 
00:26:20.693 [2024-12-13 09:37:32.930842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.930858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.931928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.931943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.932106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.932122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.932371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.932386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.932618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.932634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 
00:26:20.693 [2024-12-13 09:37:32.932799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.932814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.932993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.933008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.933242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.933257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.933468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.933484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.933655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.933671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.933898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.933914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.934018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.934033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.934266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.934282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.693 [2024-12-13 09:37:32.934382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.693 [2024-12-13 09:37:32.934397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.693 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.934635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.934652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 
00:26:20.694 [2024-12-13 09:37:32.934889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.934905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.935065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.935081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.935261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.935277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.935510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.935527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.935690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.935706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.935858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.935874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.936027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.936043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.936257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.936272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.936511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.936528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.936632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.936648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 
00:26:20.694 [2024-12-13 09:37:32.936816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.936832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.937931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.937948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.938121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.938290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.938471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 
00:26:20.694 [2024-12-13 09:37:32.938588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.938779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.938982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.938999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.939228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.939245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.939502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.939518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.939630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.939646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.939783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.939799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.940035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.940051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.940240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.940256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.940361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.940378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 
00:26:20.694 [2024-12-13 09:37:32.940616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.940633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.940845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.940861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.941040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.941057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.941215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.941231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.694 [2024-12-13 09:37:32.941471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.694 [2024-12-13 09:37:32.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.694 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.941589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.941604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.941763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.941779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.941886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.941902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.942042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.942057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.942199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.942215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 
00:26:20.695 [2024-12-13 09:37:32.942463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.942480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.942634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.942650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.942820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.942836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.943098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.943113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.943272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.943287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.943572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.943589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.943849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.943864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.944119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.944142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.944392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.944407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.944662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.944679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 
00:26:20.695 [2024-12-13 09:37:32.944884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.944899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.945186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.945212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 Malloc0 00:26:20.695 [2024-12-13 09:37:32.945464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.945483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.945725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.945740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.945947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.945963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.695 [2024-12-13 09:37:32.946105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.946122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.946280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.946295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:20.695 [2024-12-13 09:37:32.946478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.946496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.946669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.946686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 
00:26:20.695 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.695 [2024-12-13 09:37:32.946920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.946936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.695 [2024-12-13 09:37:32.947202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.947220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.947376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.947391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.947582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.947598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.947698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.947714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.947922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.947939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.948082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.948098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.948319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.948334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.948435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.948455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 
00:26:20.695 [2024-12-13 09:37:32.948630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.948646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.948873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.948889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.949059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.949074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.949225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.949240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.949517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.949533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.949761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.949776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.949883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.695 [2024-12-13 09:37:32.949898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.695 qpair failed and we were unable to recover it. 00:26:20.695 [2024-12-13 09:37:32.950148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.950163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.950320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.950335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.950501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.950518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 
00:26:20.696 [2024-12-13 09:37:32.950676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.950692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.950938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.950954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.951116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.951131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.951295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.951311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.951481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.951497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.951713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.951728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.951964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.951979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.952191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.952207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.952421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.952439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.952631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.952647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 
00:26:20.696 [2024-12-13 09:37:32.952819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.952835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.952925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.952940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.953053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.696 [2024-12-13 09:37:32.953192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.953207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.953433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.953454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.953606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.953621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.953856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.953871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.953968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.953983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.954188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.954203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.954428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.954443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.954720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.954736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 
00:26:20.696 [2024-12-13 09:37:32.954887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.954903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.955137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.955152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.955323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.955339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.955498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.955514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.955668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.955683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.955939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.955954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.956194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.956210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.956438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.956457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.956714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.956730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.956958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.956973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 
00:26:20.696 [2024-12-13 09:37:32.957118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.957133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.957343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.957359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.957582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.957598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.957817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.957839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.958008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.958024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.958254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.958273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.958485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.958503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.696 [2024-12-13 09:37:32.958671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.958689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 00:26:20.696 [2024-12-13 09:37:32.958792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.696 [2024-12-13 09:37:32.958808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.696 qpair failed and we were unable to recover it. 
00:26:20.696 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.697 [2024-12-13 09:37:32.958965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.958982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.959082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.959097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.697 [2024-12-13 09:37:32.959334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.959351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.959440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.959462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.697 [2024-12-13 09:37:32.959695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.959713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.959813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.959827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.960054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.960070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.960314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.960330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.960486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.960503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 
00:26:20.697 [2024-12-13 09:37:32.960734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.960750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.960939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.960954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.961120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.961135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.961363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.961378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.961523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.961539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.961765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.961780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.961931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.961946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.962127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.962143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.962409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.962424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.962646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.962662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 
00:26:20.697 [2024-12-13 09:37:32.962898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.962914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.963096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.963111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.963274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.963289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.963533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.963549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.963779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.963794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.963944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.963959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.964124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.964227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.964394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.964565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 
00:26:20.697 [2024-12-13 09:37:32.964722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.964890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.964905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.965061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.965076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.965303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.965318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.965460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.965476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.965689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.965705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.965934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.965952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.966215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.966232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 [2024-12-13 09:37:32.966334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.966351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.697 [2024-12-13 09:37:32.966563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.966580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 
00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.697 [2024-12-13 09:37:32.966814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.697 [2024-12-13 09:37:32.966830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.697 qpair failed and we were unable to recover it. 00:26:20.697 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.697 [2024-12-13 09:37:32.967046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.967064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.967320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.967335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.967589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.967606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.967757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.967772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.967925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.967940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.968191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.968207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.968439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.968471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.968615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.968631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 
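The rpc_cmd lines traced above are the target-side setup that the failing connections are aimed at: a subsystem nqn.2016-06.io.spdk:cnode1 is created and the Malloc0 bdev is attached to it as a namespace. As a rough sketch of the same steps driven by hand (assuming a running SPDK target and an existing Malloc0 bdev; rpc_cmd is the test harness wrapper around scripts/rpc.py):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number as shown in the trace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach the Malloc0 bdev as a namespace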
00:26:20.698 [2024-12-13 09:37:32.968790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.968805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.968958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.968973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.969177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.969192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.969412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.969427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.969638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.969654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.969811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.969826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.970040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.970055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.970272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.970287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.970440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.970462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.970622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.970638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 
00:26:20.698 [2024-12-13 09:37:32.970808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.970823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.971858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.971873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.972030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.972046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.972126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.972141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.972377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.972393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 
00:26:20.698 [2024-12-13 09:37:32.972622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.972638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.972898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.972914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.973076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.973091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.973295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.973311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.973523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.973539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.973750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.973765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 [2024-12-13 09:37:32.973998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.974015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.698 [2024-12-13 09:37:32.974176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.974192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.698 qpair failed and we were unable to recover it. 00:26:20.698 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.698 [2024-12-13 09:37:32.974421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.698 [2024-12-13 09:37:32.974438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 
00:26:20.699 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.699 [2024-12-13 09:37:32.974595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.974611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.974857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.974874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.975052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.975068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.975217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.975233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.975464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.975480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.975714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.975730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.975962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.975977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.976210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.976225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.976432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.976451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 
00:26:20.699 [2024-12-13 09:37:32.976606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.976622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.976840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.976863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.977102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.977117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.977224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.977240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.977443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.977464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.977695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.977717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 00:26:20.699 [2024-12-13 09:37:32.977883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.699 [2024-12-13 09:37:32.977899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb121a0 with addr=10.0.0.2, port=4420 00:26:20.699 qpair failed and we were unable to recover it. 
00:26:20.699 [2024-12-13 09:37:32.978007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:20.959 [2024-12-13 09:37:32.983737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.959 [2024-12-13 09:37:32.983837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.959 [2024-12-13 09:37:32.983871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.959 [2024-12-13 09:37:32.983889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.959 [2024-12-13 09:37:32.983905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.959 [2024-12-13 09:37:32.983945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.959 qpair failed and we were unable to recover it. 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.959 09:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3480640 00:26:20.959 [2024-12-13 09:37:32.993644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.959 [2024-12-13 09:37:32.993731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.959 [2024-12-13 09:37:32.993750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.959 [2024-12-13 09:37:32.993757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.959 [2024-12-13 09:37:32.993764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:32.993782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 
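Once the listener RPCs complete, the target reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420" and the failure mode in the trace changes: the earlier connect() errors with errno 111 (ECONNREFUSED, nothing listening on the port yet) give way to Fabrics CONNECT commands that reach the target but are rejected (sct 1, sc 130, with the target logging "Unknown controller ID 0x1"). A sketch of the listener step as plain scripts/rpc.py calls, with the address and port taken from the trace:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420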
00:26:20.960 [2024-12-13 09:37:33.003648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.003716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.003731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.003738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.003745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.003761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.013676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.013751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.013765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.013772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.013778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.013793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.023652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.023713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.023728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.023735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.023741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.023757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 
00:26:20.960 [2024-12-13 09:37:33.033669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.033731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.033748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.033755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.033761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.033775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.043670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.043724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.043738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.043744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.043750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.043764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.053748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.053852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.053868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.053875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.053881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.053896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 
00:26:20.960 [2024-12-13 09:37:33.063756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.063816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.063832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.063838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.063844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.063859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.073767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.073861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.073877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.073884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.073890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.073910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.083795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.083853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.083867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.083873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.083879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.083894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 
00:26:20.960 [2024-12-13 09:37:33.093811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.093877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.093891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.093898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.093904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.093918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.103887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.103948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.103961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.103967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.103973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.103988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.113859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.113915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.113928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.113935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.113941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.113956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 
00:26:20.960 [2024-12-13 09:37:33.123942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.960 [2024-12-13 09:37:33.124005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.960 [2024-12-13 09:37:33.124019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.960 [2024-12-13 09:37:33.124025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.960 [2024-12-13 09:37:33.124031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.960 [2024-12-13 09:37:33.124046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.960 qpair failed and we were unable to recover it. 00:26:20.960 [2024-12-13 09:37:33.133932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.133990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.134004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.134011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.134017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.134031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.143954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.144009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.144023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.144030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.144035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.144049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 
00:26:20.961 [2024-12-13 09:37:33.153980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.154035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.154049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.154055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.154061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.154075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.164045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.164103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.164119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.164126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.164132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.164147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.174058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.174131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.174145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.174152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.174158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.174172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 
00:26:20.961 [2024-12-13 09:37:33.184059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.184115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.184128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.184135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.184140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.184154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.194109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.194168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.194181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.194187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.194193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.194207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.204123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.204180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.204194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.204200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.204206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.204224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 
00:26:20.961 [2024-12-13 09:37:33.214191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.214255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.214268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.214275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.214281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.214295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.224160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.224231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.224245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.224252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.224258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.224273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.234214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.234267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.234281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.234288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.234294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.234308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 
00:26:20.961 [2024-12-13 09:37:33.244236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.244315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.244330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.244337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.244343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.244357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.254284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.254351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.254365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.254371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.254378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.961 [2024-12-13 09:37:33.254393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.961 qpair failed and we were unable to recover it. 00:26:20.961 [2024-12-13 09:37:33.264322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.961 [2024-12-13 09:37:33.264383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.961 [2024-12-13 09:37:33.264397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.961 [2024-12-13 09:37:33.264403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.961 [2024-12-13 09:37:33.264409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.264423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 
00:26:20.962 [2024-12-13 09:37:33.274337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.274393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.274406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.274413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.274419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.274433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 00:26:20.962 [2024-12-13 09:37:33.284356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.284411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.284425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.284431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.284437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.284455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 00:26:20.962 [2024-12-13 09:37:33.294326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.294387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.294403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.294409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.294415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.294429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 
00:26:20.962 [2024-12-13 09:37:33.304423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.304486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.304499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.304506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.304512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.304526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 00:26:20.962 [2024-12-13 09:37:33.314451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.314507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.314520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.314527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.314532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.314546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 00:26:20.962 [2024-12-13 09:37:33.324522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.962 [2024-12-13 09:37:33.324581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.962 [2024-12-13 09:37:33.324597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.962 [2024-12-13 09:37:33.324604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.962 [2024-12-13 09:37:33.324611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:20.962 [2024-12-13 09:37:33.324627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.962 qpair failed and we were unable to recover it. 
00:26:21.222 [2024-12-13 09:37:33.334530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.334589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.334606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.222 [2024-12-13 09:37:33.334612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.222 [2024-12-13 09:37:33.334618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.222 [2024-12-13 09:37:33.334637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.222 qpair failed and we were unable to recover it. 00:26:21.222 [2024-12-13 09:37:33.344509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.344568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.344582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.222 [2024-12-13 09:37:33.344588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.222 [2024-12-13 09:37:33.344594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.222 [2024-12-13 09:37:33.344609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.222 qpair failed and we were unable to recover it. 00:26:21.222 [2024-12-13 09:37:33.354553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.354601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.354614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.222 [2024-12-13 09:37:33.354621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.222 [2024-12-13 09:37:33.354627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.222 [2024-12-13 09:37:33.354641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.222 qpair failed and we were unable to recover it. 
00:26:21.222 [2024-12-13 09:37:33.364575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.364628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.364641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.222 [2024-12-13 09:37:33.364648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.222 [2024-12-13 09:37:33.364654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.222 [2024-12-13 09:37:33.364668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.222 qpair failed and we were unable to recover it. 00:26:21.222 [2024-12-13 09:37:33.374618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.374676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.374690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.222 [2024-12-13 09:37:33.374697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.222 [2024-12-13 09:37:33.374703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.222 [2024-12-13 09:37:33.374717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.222 qpair failed and we were unable to recover it. 00:26:21.222 [2024-12-13 09:37:33.384639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.222 [2024-12-13 09:37:33.384699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.222 [2024-12-13 09:37:33.384712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.384718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.384724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.384739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 
00:26:21.223 [2024-12-13 09:37:33.394594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.394646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.394660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.394666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.394672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.394686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.404693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.404750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.404764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.404770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.404776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.404790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.414732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.414788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.414801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.414808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.414813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.414828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 
00:26:21.223 [2024-12-13 09:37:33.424805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.424870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.424886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.424893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.424899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.424914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.434771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.434823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.434837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.434844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.434849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.434863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.444748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.444807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.444822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.444828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.444834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.444849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 
00:26:21.223 [2024-12-13 09:37:33.454845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.454904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.454917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.454925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.454930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.454944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.464863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.464925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.464939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.464946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.464952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.464970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.474908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.474964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.474978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.474985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.474991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.475006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 
00:26:21.223 [2024-12-13 09:37:33.484923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.484979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.484993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.484999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.485005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.485019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.494994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.495052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.495066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.495073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.495078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.495093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 00:26:21.223 [2024-12-13 09:37:33.504973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.505032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.505046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.505053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.505058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.223 [2024-12-13 09:37:33.505073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.223 qpair failed and we were unable to recover it. 
00:26:21.223 [2024-12-13 09:37:33.514989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.223 [2024-12-13 09:37:33.515067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.223 [2024-12-13 09:37:33.515082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.223 [2024-12-13 09:37:33.515088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.223 [2024-12-13 09:37:33.515094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.515109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.224 [2024-12-13 09:37:33.525035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.525091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.525105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.525111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.525117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.525131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.224 [2024-12-13 09:37:33.535091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.535149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.535162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.535169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.535174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.535188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 
00:26:21.224 [2024-12-13 09:37:33.545118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.545180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.545193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.545199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.545205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.545220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.224 [2024-12-13 09:37:33.555117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.555170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.555186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.555193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.555198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.555213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.224 [2024-12-13 09:37:33.565193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.565259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.565272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.565279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.565285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.565299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 
00:26:21.224 [2024-12-13 09:37:33.575194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.575251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.575265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.575272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.575278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.575292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.224 [2024-12-13 09:37:33.585208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.224 [2024-12-13 09:37:33.585271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.224 [2024-12-13 09:37:33.585289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.224 [2024-12-13 09:37:33.585296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.224 [2024-12-13 09:37:33.585303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.224 [2024-12-13 09:37:33.585320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.224 qpair failed and we were unable to recover it. 00:26:21.484 [2024-12-13 09:37:33.595219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.595275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.595293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.595301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.595307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.595327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 
00:26:21.484 [2024-12-13 09:37:33.605271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.605324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.605338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.605345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.605351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.605366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 00:26:21.484 [2024-12-13 09:37:33.615313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.615373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.615387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.615393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.615399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.615413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 00:26:21.484 [2024-12-13 09:37:33.625336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.625392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.625406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.625413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.625419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.625433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 
00:26:21.484 [2024-12-13 09:37:33.635357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.635417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.635430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.635436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.635442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.635460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 00:26:21.484 [2024-12-13 09:37:33.645383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.645455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.645469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.645476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.645482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.645496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 00:26:21.484 [2024-12-13 09:37:33.655422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.655533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.484 [2024-12-13 09:37:33.655548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.484 [2024-12-13 09:37:33.655555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.484 [2024-12-13 09:37:33.655560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.484 [2024-12-13 09:37:33.655575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.484 qpair failed and we were unable to recover it. 
00:26:21.484 [2024-12-13 09:37:33.665475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.484 [2024-12-13 09:37:33.665531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.665545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.665552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.665558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.665572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.675472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.675529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.675543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.675549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.675557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.675571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.685524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.685585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.685603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.685611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.685617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.685631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 
00:26:21.485 [2024-12-13 09:37:33.695565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.695632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.695645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.695652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.695658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.695674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.705518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.705570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.705583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.705590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.705596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.705610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.715631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.715699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.715712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.715718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.715725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.715739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 
00:26:21.485 [2024-12-13 09:37:33.725637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.725708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.725721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.725728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.725734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.725751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.735605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.735667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.735681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.735687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.735693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.735707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.745651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.745750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.745764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.745771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.745777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.745791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 
00:26:21.485 [2024-12-13 09:37:33.755679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.755731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.755744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.755751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.755757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.755770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.765675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.765733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.765746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.765753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.765758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.765772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.775817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.775930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.775945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.775952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.775958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.775972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 
00:26:21.485 [2024-12-13 09:37:33.785747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.785805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.785819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.785826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.785832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.485 [2024-12-13 09:37:33.785846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.485 qpair failed and we were unable to recover it. 00:26:21.485 [2024-12-13 09:37:33.795858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.485 [2024-12-13 09:37:33.795908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.485 [2024-12-13 09:37:33.795922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.485 [2024-12-13 09:37:33.795929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.485 [2024-12-13 09:37:33.795935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.795949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 00:26:21.486 [2024-12-13 09:37:33.805879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.486 [2024-12-13 09:37:33.805933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.486 [2024-12-13 09:37:33.805946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.486 [2024-12-13 09:37:33.805953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.486 [2024-12-13 09:37:33.805959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.805973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 
00:26:21.486 [2024-12-13 09:37:33.815806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.486 [2024-12-13 09:37:33.815865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.486 [2024-12-13 09:37:33.815882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.486 [2024-12-13 09:37:33.815888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.486 [2024-12-13 09:37:33.815895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.815909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 00:26:21.486 [2024-12-13 09:37:33.825908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.486 [2024-12-13 09:37:33.825979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.486 [2024-12-13 09:37:33.825992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.486 [2024-12-13 09:37:33.825999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.486 [2024-12-13 09:37:33.826005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.826020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 00:26:21.486 [2024-12-13 09:37:33.835831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.486 [2024-12-13 09:37:33.835883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.486 [2024-12-13 09:37:33.835897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.486 [2024-12-13 09:37:33.835904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.486 [2024-12-13 09:37:33.835910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.835924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 
00:26:21.486 [2024-12-13 09:37:33.845963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.486 [2024-12-13 09:37:33.846017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.486 [2024-12-13 09:37:33.846033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.486 [2024-12-13 09:37:33.846040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.486 [2024-12-13 09:37:33.846046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.486 [2024-12-13 09:37:33.846061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.486 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.855949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.856005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.856023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.856030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.856036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.856054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.865975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.866029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.866043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.866050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.866055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.866069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-12-13 09:37:33.875959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.876013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.876028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.876035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.876041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.876055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.886050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.886103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.886117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.886124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.886130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.886143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.896129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.896189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.896203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.896210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.896216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.896232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-12-13 09:37:33.906049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.906107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.906120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.906127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.906132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.906147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.916070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.916129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.916142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.746 [2024-12-13 09:37:33.916148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.746 [2024-12-13 09:37:33.916155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.746 [2024-12-13 09:37:33.916169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-12-13 09:37:33.926169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.746 [2024-12-13 09:37:33.926226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.746 [2024-12-13 09:37:33.926240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.926247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.926253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.926266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-12-13 09:37:33.936191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.936251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.936265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.936272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.936277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.936292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:33.946206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.946292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.946310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.946317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.946323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.946338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:33.956292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.956358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.956372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.956378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.956385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.956399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-12-13 09:37:33.966319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.966378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.966392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.966398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.966404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.966419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:33.976315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.976374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.976388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.976395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.976400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.976415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:33.986341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.986400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.986414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.986421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.986427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.986444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-12-13 09:37:33.996366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:33.996421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:33.996434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:33.996440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:33.996446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:33.996465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:34.006428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.006487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.006501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.006507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.006513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:34.006528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:34.016440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.016499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.016512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.016518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.016524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:34.016538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-12-13 09:37:34.026478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.026535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.026548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.026555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.026561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:34.026575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:34.036502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.036574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.036587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.036594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.036600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:34.036614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-12-13 09:37:34.046551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.046615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.046629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.046635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.046641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.747 [2024-12-13 09:37:34.046655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-12-13 09:37:34.056579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.747 [2024-12-13 09:37:34.056636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.747 [2024-12-13 09:37:34.056650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.747 [2024-12-13 09:37:34.056657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.747 [2024-12-13 09:37:34.056663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.056677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-12-13 09:37:34.066620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.748 [2024-12-13 09:37:34.066679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.748 [2024-12-13 09:37:34.066693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.748 [2024-12-13 09:37:34.066699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.748 [2024-12-13 09:37:34.066705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.066719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-12-13 09:37:34.076621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.748 [2024-12-13 09:37:34.076675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.748 [2024-12-13 09:37:34.076692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.748 [2024-12-13 09:37:34.076698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.748 [2024-12-13 09:37:34.076704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.076718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:21.748 [2024-12-13 09:37:34.086634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.748 [2024-12-13 09:37:34.086696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.748 [2024-12-13 09:37:34.086709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.748 [2024-12-13 09:37:34.086716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.748 [2024-12-13 09:37:34.086721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.086735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-12-13 09:37:34.096695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.748 [2024-12-13 09:37:34.096751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.748 [2024-12-13 09:37:34.096765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.748 [2024-12-13 09:37:34.096771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.748 [2024-12-13 09:37:34.096777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.096790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-12-13 09:37:34.106708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.748 [2024-12-13 09:37:34.106761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.748 [2024-12-13 09:37:34.106776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.748 [2024-12-13 09:37:34.106782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.748 [2024-12-13 09:37:34.106788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:21.748 [2024-12-13 09:37:34.106802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:22.008 [2024-12-13 09:37:34.116745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.116803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.116821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.116832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.116839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.116860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.126757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.126810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.126826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.126833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.126838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.126854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.136723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.136783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.136797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.136803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.136809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.136824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 
00:26:22.008 [2024-12-13 09:37:34.146818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.146875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.146889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.146895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.146901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.146916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.156877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.156960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.156974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.156981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.156987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.157001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.166876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.166926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.166940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.166946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.166952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.166966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 
00:26:22.008 [2024-12-13 09:37:34.176858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.176914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.176927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.176933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.176939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.176952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.186937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.008 [2024-12-13 09:37:34.186991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.008 [2024-12-13 09:37:34.187004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.008 [2024-12-13 09:37:34.187011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.008 [2024-12-13 09:37:34.187017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.008 [2024-12-13 09:37:34.187031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.008 qpair failed and we were unable to recover it. 00:26:22.008 [2024-12-13 09:37:34.196982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.197037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.197050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.197057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.197063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.197076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 
00:26:22.009 [2024-12-13 09:37:34.206992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.207047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.207060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.207069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.207075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.207090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.217070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.217134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.217148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.217154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.217160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.217174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.227109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.227165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.227178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.227184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.227190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.227203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 
00:26:22.009 [2024-12-13 09:37:34.237086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.237134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.237147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.237153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.237159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.237172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.247105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.247165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.247179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.247186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.247191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.247208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.257153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.257210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.257225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.257232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.257237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.257251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 
00:26:22.009 [2024-12-13 09:37:34.267205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.267290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.267305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.267311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.267317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.267332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.277236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.277317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.277331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.277338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.277344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.277358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.287221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.287270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.287284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.287290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.287296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.287309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 
00:26:22.009 [2024-12-13 09:37:34.297263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.297322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.297336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.297343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.297349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.297363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.307286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.307365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.307379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.307386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.307392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.307406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 00:26:22.009 [2024-12-13 09:37:34.317314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.317367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.317381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.317388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.009 [2024-12-13 09:37:34.317393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.009 [2024-12-13 09:37:34.317407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.009 qpair failed and we were unable to recover it. 
00:26:22.009 [2024-12-13 09:37:34.327338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.009 [2024-12-13 09:37:34.327396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.009 [2024-12-13 09:37:34.327410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.009 [2024-12-13 09:37:34.327417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.010 [2024-12-13 09:37:34.327422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.010 [2024-12-13 09:37:34.327436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.010 qpair failed and we were unable to recover it. 00:26:22.010 [2024-12-13 09:37:34.337376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.010 [2024-12-13 09:37:34.337434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.010 [2024-12-13 09:37:34.337452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.010 [2024-12-13 09:37:34.337462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.010 [2024-12-13 09:37:34.337468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.010 [2024-12-13 09:37:34.337482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.010 qpair failed and we were unable to recover it. 00:26:22.010 [2024-12-13 09:37:34.347400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.010 [2024-12-13 09:37:34.347461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.010 [2024-12-13 09:37:34.347475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.010 [2024-12-13 09:37:34.347481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.010 [2024-12-13 09:37:34.347487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.010 [2024-12-13 09:37:34.347501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.010 qpair failed and we were unable to recover it. 
00:26:22.010 [2024-12-13 09:37:34.357421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.010 [2024-12-13 09:37:34.357476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.010 [2024-12-13 09:37:34.357490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.010 [2024-12-13 09:37:34.357496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.010 [2024-12-13 09:37:34.357502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.010 [2024-12-13 09:37:34.357516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.010 qpair failed and we were unable to recover it. 00:26:22.010 [2024-12-13 09:37:34.367443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.010 [2024-12-13 09:37:34.367498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.010 [2024-12-13 09:37:34.367511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.010 [2024-12-13 09:37:34.367518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.010 [2024-12-13 09:37:34.367523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.010 [2024-12-13 09:37:34.367537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.010 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.377483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.377579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.377597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.377604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.377610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.377629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 
00:26:22.270 [2024-12-13 09:37:34.387507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.387565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.387580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.387587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.387593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.387608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.397512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.397566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.397580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.397586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.397592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.397607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.407562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.407617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.407631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.407638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.407643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.407658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 
00:26:22.270 [2024-12-13 09:37:34.417594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.417650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.417664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.417671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.417677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.417691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.427632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.427687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.427701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.427707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.427713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.427727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.437675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.437732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.437746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.437752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.437758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.437772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 
00:26:22.270 [2024-12-13 09:37:34.447678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.447731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.447745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.447751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.447757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.447771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.270 [2024-12-13 09:37:34.457725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.270 [2024-12-13 09:37:34.457792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.270 [2024-12-13 09:37:34.457805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.270 [2024-12-13 09:37:34.457812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.270 [2024-12-13 09:37:34.457818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.270 [2024-12-13 09:37:34.457831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.270 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.467741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.467796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.467810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.467819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.467825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.467839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 
00:26:22.271 [2024-12-13 09:37:34.477764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.477821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.477835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.477842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.477848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.477864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.487767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.487824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.487838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.487844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.487850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.487865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.497838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.497893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.497906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.497913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.497919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.497932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 
00:26:22.271 [2024-12-13 09:37:34.507893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.507980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.507994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.508001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.508007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.508024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.517905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.517967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.517981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.517987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.517993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.518008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.527918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.527971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.527984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.527991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.527997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.528011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 
00:26:22.271 [2024-12-13 09:37:34.537988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.538044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.538057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.538063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.538069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.538083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.547970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.548023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.548037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.548044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.548050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.548063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.558023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.558100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.558113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.558119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.558125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.558139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 
00:26:22.271 [2024-12-13 09:37:34.568041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.568099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.568112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.568118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.568124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.568138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.578072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.578127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.578140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.578147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.578152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.578167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 00:26:22.271 [2024-12-13 09:37:34.588104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.588155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.271 [2024-12-13 09:37:34.588172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.271 [2024-12-13 09:37:34.588179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.271 [2024-12-13 09:37:34.588185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.271 [2024-12-13 09:37:34.588200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.271 qpair failed and we were unable to recover it. 
00:26:22.271 [2024-12-13 09:37:34.598113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.271 [2024-12-13 09:37:34.598163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.272 [2024-12-13 09:37:34.598178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.272 [2024-12-13 09:37:34.598190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.272 [2024-12-13 09:37:34.598196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.272 [2024-12-13 09:37:34.598210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.272 qpair failed and we were unable to recover it. 00:26:22.272 [2024-12-13 09:37:34.608141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.272 [2024-12-13 09:37:34.608199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.272 [2024-12-13 09:37:34.608212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.272 [2024-12-13 09:37:34.608219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.272 [2024-12-13 09:37:34.608225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.272 [2024-12-13 09:37:34.608239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.272 qpair failed and we were unable to recover it. 00:26:22.272 [2024-12-13 09:37:34.618176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.272 [2024-12-13 09:37:34.618233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.272 [2024-12-13 09:37:34.618246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.272 [2024-12-13 09:37:34.618252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.272 [2024-12-13 09:37:34.618258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.272 [2024-12-13 09:37:34.618271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.272 qpair failed and we were unable to recover it. 
00:26:22.272 [2024-12-13 09:37:34.628186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.272 [2024-12-13 09:37:34.628242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.272 [2024-12-13 09:37:34.628255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.272 [2024-12-13 09:37:34.628262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.272 [2024-12-13 09:37:34.628267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.272 [2024-12-13 09:37:34.628281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.272 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.638225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.638284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.638301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.638308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.638314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.638333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.648260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.648310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.648326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.648333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.648339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.648354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 
00:26:22.532 [2024-12-13 09:37:34.658318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.658373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.658388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.658394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.658400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.658415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.668302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.668358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.668372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.668379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.668384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.668398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.678312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.678368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.678382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.678388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.678394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.678409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 
00:26:22.532 [2024-12-13 09:37:34.688350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.688405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.688418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.688424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.688430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.688444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.698394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.698452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.698466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.698473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.698479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.698492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.708411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.708468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.708482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.708488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.708494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.708509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 
00:26:22.532 [2024-12-13 09:37:34.718432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.718490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.718503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.718510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.718515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.718529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.728469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.728525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.728539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.728548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.728554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.728568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 00:26:22.532 [2024-12-13 09:37:34.738519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.532 [2024-12-13 09:37:34.738578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.532 [2024-12-13 09:37:34.738591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.532 [2024-12-13 09:37:34.738597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.532 [2024-12-13 09:37:34.738603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.532 [2024-12-13 09:37:34.738617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.532 qpair failed and we were unable to recover it. 
00:26:22.533 [2024-12-13 09:37:34.748516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.748576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.748590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.748596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.748602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.748616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.758541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.758598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.758611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.758617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.758623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.758636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.768572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.768625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.768638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.768644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.768650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.768667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 
00:26:22.533 [2024-12-13 09:37:34.778628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.778708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.778722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.778729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.778735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.778749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.788640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.788695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.788708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.788715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.788720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.788734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.798643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.798700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.798714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.798720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.798726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.798740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 
00:26:22.533 [2024-12-13 09:37:34.808711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.808767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.808780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.808787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.808792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.808807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.818681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.818743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.818756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.818763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.818768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.818782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.828725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.828781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.828794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.828801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.828807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.828821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 
00:26:22.533 [2024-12-13 09:37:34.838774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.838829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.838842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.838849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.838855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.838868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.848805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.848856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.848869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.848875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.848881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.848894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.858864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.858940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.858953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.858963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.858969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.858984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 
00:26:22.533 [2024-12-13 09:37:34.868875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.868934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.533 [2024-12-13 09:37:34.868947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.533 [2024-12-13 09:37:34.868953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.533 [2024-12-13 09:37:34.868959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.533 [2024-12-13 09:37:34.868972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.533 qpair failed and we were unable to recover it. 00:26:22.533 [2024-12-13 09:37:34.878899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.533 [2024-12-13 09:37:34.878952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.534 [2024-12-13 09:37:34.878966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.534 [2024-12-13 09:37:34.878972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.534 [2024-12-13 09:37:34.878978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.534 [2024-12-13 09:37:34.878992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.534 qpair failed and we were unable to recover it. 00:26:22.534 [2024-12-13 09:37:34.888919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.534 [2024-12-13 09:37:34.888971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.534 [2024-12-13 09:37:34.888985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.534 [2024-12-13 09:37:34.888991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.534 [2024-12-13 09:37:34.888997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.534 [2024-12-13 09:37:34.889010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.534 qpair failed and we were unable to recover it. 
00:26:22.809 [2024-12-13 09:37:34.898958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.809 [2024-12-13 09:37:34.899014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.899030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.899036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.899042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.899061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.908926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.908988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.909003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.909009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.909015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.909030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.919013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.919099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.919115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.919122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.919128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.919144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 
00:26:22.810 [2024-12-13 09:37:34.929017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.929096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.929112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.929119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.929125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.929140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.939075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.939132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.939146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.939152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.939158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.939172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.949084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.949141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.949155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.949162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.949167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.949182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 
00:26:22.810 [2024-12-13 09:37:34.959127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.959181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.959194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.959201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.959207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.959221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.969156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.969209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.969222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.969228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.969234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.969248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.979163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.979222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.979235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.979241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.979247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.979261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 
00:26:22.810 [2024-12-13 09:37:34.989209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.989266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.989280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.989289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.989294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.989308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:34.999230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:34.999288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:34.999301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:34.999308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:34.999314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:34.999327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:35.009283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:35.009337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:35.009350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:35.009357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:35.009362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:35.009376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 
00:26:22.810 [2024-12-13 09:37:35.019310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:35.019369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:35.019384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:35.019390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.810 [2024-12-13 09:37:35.019396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.810 [2024-12-13 09:37:35.019409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.810 qpair failed and we were unable to recover it. 00:26:22.810 [2024-12-13 09:37:35.029325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.810 [2024-12-13 09:37:35.029381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.810 [2024-12-13 09:37:35.029394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.810 [2024-12-13 09:37:35.029401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.029407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.029424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.039384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.039441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.039459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.039466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.039472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.039486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 
00:26:22.811 [2024-12-13 09:37:35.049384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.049437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.049456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.049463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.049469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.049483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.059402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.059466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.059479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.059485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.059491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.059505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.069444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.069503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.069517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.069523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.069529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.069543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 
00:26:22.811 [2024-12-13 09:37:35.079515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.079573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.079587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.079594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.079600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.079614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.089485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.089543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.089556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.089563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.089569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.089584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.099543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.099600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.099613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.099620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.099626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.099639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 
00:26:22.811 [2024-12-13 09:37:35.109571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.109656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.109671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.109678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.109684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.109698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.119600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.119657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.119671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.119680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.119686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.119700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.129563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.129615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.129627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.129634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.129639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.129653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 
00:26:22.811 [2024-12-13 09:37:35.139667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.139724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.139737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.139744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.139749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.139763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.149672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.149728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.149742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.149748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.149754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.149768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 00:26:22.811 [2024-12-13 09:37:35.159694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.811 [2024-12-13 09:37:35.159750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.811 [2024-12-13 09:37:35.159763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.811 [2024-12-13 09:37:35.159770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.811 [2024-12-13 09:37:35.159775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.811 [2024-12-13 09:37:35.159792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.811 qpair failed and we were unable to recover it. 
00:26:22.811 [2024-12-13 09:37:35.169717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.812 [2024-12-13 09:37:35.169772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.812 [2024-12-13 09:37:35.169785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.812 [2024-12-13 09:37:35.169792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.812 [2024-12-13 09:37:35.169797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:22.812 [2024-12-13 09:37:35.169811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.812 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.179770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.179829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.179846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.179853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.179859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.179875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.189779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.189833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.189849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.189855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.189861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.189877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 
00:26:23.071 [2024-12-13 09:37:35.199804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.199859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.199872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.199878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.199884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.199899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.209803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.209862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.209876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.209882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.209888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.209901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.219869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.219927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.219941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.219948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.219954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.219968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 
00:26:23.071 [2024-12-13 09:37:35.229844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.229912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.229925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.229931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.229937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.229951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.239871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.239926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.239940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.239947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.239952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.239967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.249967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.250024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.250036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.250046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.250052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.250066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 
00:26:23.071 [2024-12-13 09:37:35.259993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.260052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.260066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.260072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.260078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.260091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.270041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.270109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.270123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.270130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.270136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.270151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 00:26:23.071 [2024-12-13 09:37:35.280054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.280111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.280124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.280131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.280137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.071 [2024-12-13 09:37:35.280149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.071 qpair failed and we were unable to recover it. 
00:26:23.071 [2024-12-13 09:37:35.290076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.071 [2024-12-13 09:37:35.290131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.071 [2024-12-13 09:37:35.290143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.071 [2024-12-13 09:37:35.290150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.071 [2024-12-13 09:37:35.290155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.290172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.300107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.300164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.300178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.300184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.300189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.300203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.310164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.310217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.310230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.310237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.310242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.310256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 
00:26:23.072 [2024-12-13 09:37:35.320250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.320306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.320319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.320325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.320331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.320345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.330198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.330252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.330265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.330272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.330278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.330291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.340257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.340361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.340375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.340381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.340387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.340402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 
00:26:23.072 [2024-12-13 09:37:35.350251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.350309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.350323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.350330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.350335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.350349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.360273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.360334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.360347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.360353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.360359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.360373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.370291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.370391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.370405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.370412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.370418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.370432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 
00:26:23.072 [2024-12-13 09:37:35.380307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.380382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.380395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.380409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.380415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.380429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.390395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.390496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.390510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.390517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.390523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.390537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.400404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.400475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.400489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.400495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.400501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.400516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 
00:26:23.072 [2024-12-13 09:37:35.410413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.410470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.410483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.410489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.410495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.410509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.420458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.420516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.072 [2024-12-13 09:37:35.420529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.072 [2024-12-13 09:37:35.420536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.072 [2024-12-13 09:37:35.420541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.072 [2024-12-13 09:37:35.420558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.072 qpair failed and we were unable to recover it. 00:26:23.072 [2024-12-13 09:37:35.430482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.072 [2024-12-13 09:37:35.430538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.073 [2024-12-13 09:37:35.430553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.073 [2024-12-13 09:37:35.430559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.073 [2024-12-13 09:37:35.430565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.073 [2024-12-13 09:37:35.430580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.073 qpair failed and we were unable to recover it. 
00:26:23.332 [2024-12-13 09:37:35.440539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.332 [2024-12-13 09:37:35.440592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.332 [2024-12-13 09:37:35.440609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.332 [2024-12-13 09:37:35.440616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.332 [2024-12-13 09:37:35.440622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.332 [2024-12-13 09:37:35.440638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.332 qpair failed and we were unable to recover it. 00:26:23.332 [2024-12-13 09:37:35.450545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.332 [2024-12-13 09:37:35.450602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.332 [2024-12-13 09:37:35.450618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.332 [2024-12-13 09:37:35.450624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.332 [2024-12-13 09:37:35.450630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.332 [2024-12-13 09:37:35.450646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.332 qpair failed and we were unable to recover it. 00:26:23.332 [2024-12-13 09:37:35.460583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.332 [2024-12-13 09:37:35.460645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.332 [2024-12-13 09:37:35.460659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.332 [2024-12-13 09:37:35.460665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.332 [2024-12-13 09:37:35.460671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.332 [2024-12-13 09:37:35.460686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.332 qpair failed and we were unable to recover it. 
00:26:23.332 [2024-12-13 09:37:35.470619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.332 [2024-12-13 09:37:35.470690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.332 [2024-12-13 09:37:35.470704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.332 [2024-12-13 09:37:35.470710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.332 [2024-12-13 09:37:35.470716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.332 [2024-12-13 09:37:35.470731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.332 qpair failed and we were unable to recover it. 00:26:23.332 [2024-12-13 09:37:35.480545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.332 [2024-12-13 09:37:35.480601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.332 [2024-12-13 09:37:35.480615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.332 [2024-12-13 09:37:35.480622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.332 [2024-12-13 09:37:35.480628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.332 [2024-12-13 09:37:35.480643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.332 qpair failed and we were unable to recover it. 00:26:23.332 [2024-12-13 09:37:35.490689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.490742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.490755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.490762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.490768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.490782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 
00:26:23.333 [2024-12-13 09:37:35.500676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.500746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.500760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.500766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.500772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.500786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.510693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.510774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.510788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.510798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.510804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.510819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.520736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.520792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.520805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.520812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.520817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.520832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 
00:26:23.333 [2024-12-13 09:37:35.530743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.530804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.530817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.530823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.530829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.530843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.540782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.540844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.540857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.540863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.540869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.540884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.550812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.550886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.550899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.550906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.550912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.550929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 
00:26:23.333 [2024-12-13 09:37:35.560825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.560885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.560898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.560904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.560910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.560923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.570877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.570935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.570948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.570954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.570960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.570974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.580891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.580957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.580970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.580976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.580982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.580996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 
00:26:23.333 [2024-12-13 09:37:35.590916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.590976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.590993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.591001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.591009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.591025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.600939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.601015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.601029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.601036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.601042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.601058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 00:26:23.333 [2024-12-13 09:37:35.610970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.611029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.611042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.611049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.333 [2024-12-13 09:37:35.611054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.333 [2024-12-13 09:37:35.611070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.333 qpair failed and we were unable to recover it. 
00:26:23.333 [2024-12-13 09:37:35.621038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.333 [2024-12-13 09:37:35.621097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.333 [2024-12-13 09:37:35.621111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.333 [2024-12-13 09:37:35.621118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.621124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.621138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.334 [2024-12-13 09:37:35.630981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.631079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.631094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.631101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.631106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.631122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.334 [2024-12-13 09:37:35.640978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.641037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.641050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.641059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.641065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.641078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 
00:26:23.334 [2024-12-13 09:37:35.651080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.651134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.651148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.651154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.651160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.651174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.334 [2024-12-13 09:37:35.661118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.661173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.661186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.661192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.661198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.661212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.334 [2024-12-13 09:37:35.671169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.671244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.671276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.671282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.671288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.671305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 
00:26:23.334 [2024-12-13 09:37:35.681171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.681229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.681243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.681249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.681255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.681272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.334 [2024-12-13 09:37:35.691195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.334 [2024-12-13 09:37:35.691254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.334 [2024-12-13 09:37:35.691267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.334 [2024-12-13 09:37:35.691273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.334 [2024-12-13 09:37:35.691279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.334 [2024-12-13 09:37:35.691293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.334 qpair failed and we were unable to recover it. 00:26:23.594 [2024-12-13 09:37:35.701222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.701307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.701325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.701332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.701339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.701355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 
00:26:23.594 [2024-12-13 09:37:35.711265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.711325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.711341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.711348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.711354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.711370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 00:26:23.594 [2024-12-13 09:37:35.721293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.721349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.721363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.721370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.721375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.721390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 00:26:23.594 [2024-12-13 09:37:35.731300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.731360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.731373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.731380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.731385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.731399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 
00:26:23.594 [2024-12-13 09:37:35.741278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.741336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.741349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.741356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.741362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.741376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 00:26:23.594 [2024-12-13 09:37:35.751421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.751486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.751499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.751506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.751513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.751527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 00:26:23.594 [2024-12-13 09:37:35.761396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.761454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.761469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.761475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.594 [2024-12-13 09:37:35.761481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.594 [2024-12-13 09:37:35.761495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.594 qpair failed and we were unable to recover it. 
00:26:23.594 [2024-12-13 09:37:35.771430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.594 [2024-12-13 09:37:35.771489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.594 [2024-12-13 09:37:35.771503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.594 [2024-12-13 09:37:35.771512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.771518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.771533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.781482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.781546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.781560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.781567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.781573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.781588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.791615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.791805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.791820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.791827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.791833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.791848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 
00:26:23.595 [2024-12-13 09:37:35.801562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.801641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.801655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.801662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.801668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.801683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.811580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.811635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.811648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.811655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.811661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.811678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.821566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.821626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.821639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.821646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.821652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.821665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 
00:26:23.595 [2024-12-13 09:37:35.831617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.831679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.831692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.831698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.831704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.831718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.841612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.841678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.841692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.841698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.841705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.841719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.851625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.851681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.851694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.851701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.851706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.851721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 
00:26:23.595 [2024-12-13 09:37:35.861701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.861784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.861798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.861804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.861810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.861825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.871712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.871769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.871782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.871789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.871794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.871809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.881734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.881791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.881805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.881812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.881818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.881832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 
00:26:23.595 [2024-12-13 09:37:35.891766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.891820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.891833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.891840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.891846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.891860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.901803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.595 [2024-12-13 09:37:35.901861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.595 [2024-12-13 09:37:35.901874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.595 [2024-12-13 09:37:35.901884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.595 [2024-12-13 09:37:35.901889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.595 [2024-12-13 09:37:35.901903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.595 qpair failed and we were unable to recover it. 00:26:23.595 [2024-12-13 09:37:35.911821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.596 [2024-12-13 09:37:35.911881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.596 [2024-12-13 09:37:35.911894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.596 [2024-12-13 09:37:35.911901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.596 [2024-12-13 09:37:35.911906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.596 [2024-12-13 09:37:35.911921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.596 qpair failed and we were unable to recover it. 
00:26:23.596 [2024-12-13 09:37:35.921847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.596 [2024-12-13 09:37:35.921901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.596 [2024-12-13 09:37:35.921914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.596 [2024-12-13 09:37:35.921921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.596 [2024-12-13 09:37:35.921926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.596 [2024-12-13 09:37:35.921940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.596 qpair failed and we were unable to recover it. 00:26:23.596 [2024-12-13 09:37:35.931866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.596 [2024-12-13 09:37:35.931927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.596 [2024-12-13 09:37:35.931941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.596 [2024-12-13 09:37:35.931948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.596 [2024-12-13 09:37:35.931953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.596 [2024-12-13 09:37:35.931967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.596 qpair failed and we were unable to recover it. 00:26:23.596 [2024-12-13 09:37:35.941912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.596 [2024-12-13 09:37:35.941973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.596 [2024-12-13 09:37:35.941987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.596 [2024-12-13 09:37:35.941993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.596 [2024-12-13 09:37:35.941999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.596 [2024-12-13 09:37:35.942016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.596 qpair failed and we were unable to recover it. 
00:26:23.596 [2024-12-13 09:37:35.951950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.596 [2024-12-13 09:37:35.952005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.596 [2024-12-13 09:37:35.952018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.596 [2024-12-13 09:37:35.952024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.596 [2024-12-13 09:37:35.952030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.596 [2024-12-13 09:37:35.952044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.596 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:35.961954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:35.962012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:35.962029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:35.962036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:35.962041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:35.962057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:35.971985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:35.972037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:35.972052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:35.972059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:35.972065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:35.972080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 
00:26:23.856 [2024-12-13 09:37:35.982059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:35.982113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:35.982127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:35.982133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:35.982139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:35.982154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:35.992093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:35.992153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:35.992167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:35.992174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:35.992179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:35.992194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.002060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.002116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.002130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.002136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.002142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.002156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 
00:26:23.856 [2024-12-13 09:37:36.012094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.012147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.012161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.012168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.012173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.012188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.022132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.022189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.022203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.022210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.022215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.022230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.032158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.032217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.032231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.032241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.032247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.032261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 
00:26:23.856 [2024-12-13 09:37:36.042169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.042221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.042234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.042241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.042247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.042260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.052208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.052260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.052274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.052280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.052286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.052300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.062253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.062316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.062329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.062335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.062340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.062355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 
00:26:23.856 [2024-12-13 09:37:36.072261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.072315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.072329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.072335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.072341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.856 [2024-12-13 09:37:36.072358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.856 qpair failed and we were unable to recover it. 00:26:23.856 [2024-12-13 09:37:36.082281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.856 [2024-12-13 09:37:36.082334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.856 [2024-12-13 09:37:36.082349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.856 [2024-12-13 09:37:36.082355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.856 [2024-12-13 09:37:36.082361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.082376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.092323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.092379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.092392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.092399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.092404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.092418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 
00:26:23.857 [2024-12-13 09:37:36.102404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.102510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.102524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.102531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.102537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.102552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.112408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.112463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.112476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.112483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.112488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.112503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.122413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.122469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.122483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.122489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.122495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.122509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 
00:26:23.857 [2024-12-13 09:37:36.132428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.132495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.132508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.132515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.132521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.132535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.142481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.142537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.142551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.142557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.142563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.142577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.152545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.152605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.152618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.152624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.152630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.152645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 
00:26:23.857 [2024-12-13 09:37:36.162572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.162633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.162646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.162655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.162661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.162675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.172546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.172602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.172615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.172622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.172628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.172642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.182589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.182646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.182661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.182668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.182674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.182688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 
00:26:23.857 [2024-12-13 09:37:36.192608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.192660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.192673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.192680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.192685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.192699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.202633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.202687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.202700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.202707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.202712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.202729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 00:26:23.857 [2024-12-13 09:37:36.212658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.857 [2024-12-13 09:37:36.212710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.857 [2024-12-13 09:37:36.212723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.857 [2024-12-13 09:37:36.212730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.857 [2024-12-13 09:37:36.212735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:23.857 [2024-12-13 09:37:36.212749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.857 qpair failed and we were unable to recover it. 
00:26:24.118 [2024-12-13 09:37:36.222711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.222778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.118 [2024-12-13 09:37:36.222795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.118 [2024-12-13 09:37:36.222802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.118 [2024-12-13 09:37:36.222808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.118 [2024-12-13 09:37:36.222823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.118 qpair failed and we were unable to recover it. 00:26:24.118 [2024-12-13 09:37:36.232703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.232775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.118 [2024-12-13 09:37:36.232791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.118 [2024-12-13 09:37:36.232798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.118 [2024-12-13 09:37:36.232804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.118 [2024-12-13 09:37:36.232819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.118 qpair failed and we were unable to recover it. 00:26:24.118 [2024-12-13 09:37:36.242756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.242810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.118 [2024-12-13 09:37:36.242824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.118 [2024-12-13 09:37:36.242831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.118 [2024-12-13 09:37:36.242837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.118 [2024-12-13 09:37:36.242851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.118 qpair failed and we were unable to recover it. 
00:26:24.118 [2024-12-13 09:37:36.252805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.252882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.118 [2024-12-13 09:37:36.252896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.118 [2024-12-13 09:37:36.252903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.118 [2024-12-13 09:37:36.252909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.118 [2024-12-13 09:37:36.252923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.118 qpair failed and we were unable to recover it. 00:26:24.118 [2024-12-13 09:37:36.262815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.262870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.118 [2024-12-13 09:37:36.262883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.118 [2024-12-13 09:37:36.262889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.118 [2024-12-13 09:37:36.262895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.118 [2024-12-13 09:37:36.262909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.118 qpair failed and we were unable to recover it. 00:26:24.118 [2024-12-13 09:37:36.272778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.118 [2024-12-13 09:37:36.272838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.272852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.272858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.272864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.272878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 
00:26:24.119 [2024-12-13 09:37:36.282862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.282916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.282930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.282936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.282942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.282957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.292887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.292939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.292952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.292962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.292968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.292982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.302935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.302992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.303005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.303012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.303017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.303031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 
00:26:24.119 [2024-12-13 09:37:36.312959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.313013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.313026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.313032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.313038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.313052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.322982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.323035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.323048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.323055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.323060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.323074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.333004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.333061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.333074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.333081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.333087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.333106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 
00:26:24.119 [2024-12-13 09:37:36.343047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.343103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.343116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.343123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.343128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.343142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.353054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.353155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.353169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.353175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.353181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.353195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.363105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.363160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.363173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.363180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.363185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.363199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 
00:26:24.119 [2024-12-13 09:37:36.373120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.373177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.373191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.373197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.373203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.373216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.383157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.383216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.383230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.119 [2024-12-13 09:37:36.383236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.119 [2024-12-13 09:37:36.383242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.119 [2024-12-13 09:37:36.383256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.119 qpair failed and we were unable to recover it. 00:26:24.119 [2024-12-13 09:37:36.393167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.119 [2024-12-13 09:37:36.393225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.119 [2024-12-13 09:37:36.393239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.393245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.393251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.393265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 
00:26:24.120 [2024-12-13 09:37:36.403174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.403223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.403237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.403244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.403249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.403263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.413241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.413299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.413313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.413319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.413324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.413338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.423263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.423319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.423332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.423342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.423347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.423362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 
00:26:24.120 [2024-12-13 09:37:36.433288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.433343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.433356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.433362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.433368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.433382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.443317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.443368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.443381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.443388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.443394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.443408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.453341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.453403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.453417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.453423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.453429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.453443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 
00:26:24.120 [2024-12-13 09:37:36.463382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.463438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.463455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.463462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.463467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.463485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.473444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.473504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.473518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.473525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.473530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.473545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 00:26:24.120 [2024-12-13 09:37:36.483425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.120 [2024-12-13 09:37:36.483489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.120 [2024-12-13 09:37:36.483506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.120 [2024-12-13 09:37:36.483513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.120 [2024-12-13 09:37:36.483519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.120 [2024-12-13 09:37:36.483536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.120 qpair failed and we were unable to recover it. 
00:26:24.381 [2024-12-13 09:37:36.493445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.493506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.493523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.493530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.493535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.493551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 00:26:24.381 [2024-12-13 09:37:36.503512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.503573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.503586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.503593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.503598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.503613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 00:26:24.381 [2024-12-13 09:37:36.513527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.513587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.513602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.513610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.513615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.513630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 
00:26:24.381 [2024-12-13 09:37:36.523591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.523646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.523659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.523666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.523672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.523686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 00:26:24.381 [2024-12-13 09:37:36.533515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.533571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.533585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.533591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.533597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.533611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 00:26:24.381 [2024-12-13 09:37:36.543553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.543624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.543638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.543644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.543651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.543665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 
00:26:24.381 [2024-12-13 09:37:36.553624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.553682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.553695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.553704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.553710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.553724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.381 qpair failed and we were unable to recover it. 00:26:24.381 [2024-12-13 09:37:36.563654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.381 [2024-12-13 09:37:36.563709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.381 [2024-12-13 09:37:36.563722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.381 [2024-12-13 09:37:36.563729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.381 [2024-12-13 09:37:36.563735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.381 [2024-12-13 09:37:36.563748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.573649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.573707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.573721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.573728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.573733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.573748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 
00:26:24.382 [2024-12-13 09:37:36.583768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.583846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.583862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.583870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.583876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.583891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.593685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.593740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.593754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.593760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.593766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.593784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.603769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.603819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.603833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.603840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.603846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.603861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 
00:26:24.382 [2024-12-13 09:37:36.613742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.613799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.613812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.613818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.613824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.613839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.623871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.623927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.623940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.623946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.623952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.623966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.633870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.633929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.633943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.633949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.633955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.633968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 
00:26:24.382 [2024-12-13 09:37:36.643874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.643939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.643952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.643959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.643965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.643978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.653917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.653971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.653985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.653991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.653998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.654011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.382 [2024-12-13 09:37:36.663916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.663972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.663986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.663993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.663999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.664013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 
00:26:24.382 [2024-12-13 09:37:36.674025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.382 [2024-12-13 09:37:36.674083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.382 [2024-12-13 09:37:36.674097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.382 [2024-12-13 09:37:36.674104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.382 [2024-12-13 09:37:36.674110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.382 [2024-12-13 09:37:36.674124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.382 qpair failed and we were unable to recover it. 00:26:24.383 [2024-12-13 09:37:36.684003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.684055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.684069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.684079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.684085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.684099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 00:26:24.383 [2024-12-13 09:37:36.694005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.694055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.694069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.694076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.694082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.694095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 
00:26:24.383 [2024-12-13 09:37:36.704076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.704136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.704150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.704157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.704163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.704178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 00:26:24.383 [2024-12-13 09:37:36.714051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.714132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.714146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.714153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.714159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.714173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 00:26:24.383 [2024-12-13 09:37:36.724138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.724196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.724209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.724215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.724221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.724238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 
00:26:24.383 [2024-12-13 09:37:36.734087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.734143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.734156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.734162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.734167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.734181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 00:26:24.383 [2024-12-13 09:37:36.744193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.383 [2024-12-13 09:37:36.744255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.383 [2024-12-13 09:37:36.744271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.383 [2024-12-13 09:37:36.744278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.383 [2024-12-13 09:37:36.744284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.383 [2024-12-13 09:37:36.744300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.383 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.754198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.754256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.754273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.754279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.754285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.754301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 
00:26:24.644 [2024-12-13 09:37:36.764227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.764282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.764296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.764303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.764309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.764323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.774249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.774303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.774318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.774324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.774330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.774344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.784243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.784300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.784314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.784320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.784326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.784340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 
00:26:24.644 [2024-12-13 09:37:36.794269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.794330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.794344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.794350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.794356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.794370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.804350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.804404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.804417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.804424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.804429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.804443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.814346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.814417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.814430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.814439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.814445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.814465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 
00:26:24.644 [2024-12-13 09:37:36.824363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.824420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.824434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.824440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.824446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.824464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.834429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.834489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.834503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.644 [2024-12-13 09:37:36.834509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.644 [2024-12-13 09:37:36.834515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.644 [2024-12-13 09:37:36.834530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.644 qpair failed and we were unable to recover it. 00:26:24.644 [2024-12-13 09:37:36.844410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.644 [2024-12-13 09:37:36.844470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.644 [2024-12-13 09:37:36.844484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.844490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.844496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.844511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 
00:26:24.645 [2024-12-13 09:37:36.854424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.854484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.854498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.854504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.854510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.854528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.864465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.864524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.864537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.864544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.864550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.864564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.874566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.874617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.874631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.874637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.874643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.874656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 
00:26:24.645 [2024-12-13 09:37:36.884623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.884676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.884689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.884696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.884701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.884715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.894559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.894651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.894665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.894672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.894678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.894692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.904647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.904711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.904725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.904731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.904737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.904751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 
00:26:24.645 [2024-12-13 09:37:36.914665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.914717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.914730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.914737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.914743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.914756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.924677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.924758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.924772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.924778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.924784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.924798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.934715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.934771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.934784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.934790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.934796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.934809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 
00:26:24.645 [2024-12-13 09:37:36.944769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.944826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.944839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.944849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.944854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.944869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.954786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.954871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.954887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.954893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.954899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.645 [2024-12-13 09:37:36.954914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.645 qpair failed and we were unable to recover it. 00:26:24.645 [2024-12-13 09:37:36.964785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.645 [2024-12-13 09:37:36.964841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.645 [2024-12-13 09:37:36.964854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.645 [2024-12-13 09:37:36.964861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.645 [2024-12-13 09:37:36.964867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.646 [2024-12-13 09:37:36.964880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.646 qpair failed and we were unable to recover it. 
00:26:24.646 [2024-12-13 09:37:36.974841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.646 [2024-12-13 09:37:36.974897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.646 [2024-12-13 09:37:36.974910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.646 [2024-12-13 09:37:36.974916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.646 [2024-12-13 09:37:36.974922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.646 [2024-12-13 09:37:36.974936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.646 qpair failed and we were unable to recover it. 00:26:24.646 [2024-12-13 09:37:36.984864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.646 [2024-12-13 09:37:36.984921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.646 [2024-12-13 09:37:36.984935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.646 [2024-12-13 09:37:36.984941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.646 [2024-12-13 09:37:36.984947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.646 [2024-12-13 09:37:36.984964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.646 qpair failed and we were unable to recover it. 00:26:24.646 [2024-12-13 09:37:36.994886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.646 [2024-12-13 09:37:36.994943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.646 [2024-12-13 09:37:36.994957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.646 [2024-12-13 09:37:36.994963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.646 [2024-12-13 09:37:36.994969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.646 [2024-12-13 09:37:36.994983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.646 qpair failed and we were unable to recover it. 
00:26:24.646 [2024-12-13 09:37:37.004910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.646 [2024-12-13 09:37:37.004970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.646 [2024-12-13 09:37:37.004985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.646 [2024-12-13 09:37:37.004992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.646 [2024-12-13 09:37:37.004998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.646 [2024-12-13 09:37:37.005013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.646 qpair failed and we were unable to recover it. 00:26:24.906 [2024-12-13 09:37:37.014934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.906 [2024-12-13 09:37:37.014991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.906 [2024-12-13 09:37:37.015007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.906 [2024-12-13 09:37:37.015015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.906 [2024-12-13 09:37:37.015020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.906 [2024-12-13 09:37:37.015037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.906 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.024977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.025036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.025051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.025057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.025063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.025078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 
00:26:24.907 [2024-12-13 09:37:37.035000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.035062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.035076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.035082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.035088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.035103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.045024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.045106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.045120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.045127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.045133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.045147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.054994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.055045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.055059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.055065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.055071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.055085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 
00:26:24.907 [2024-12-13 09:37:37.065087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.065145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.065158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.065165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.065171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.065184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.075111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.075165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.075178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.075187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.075193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.075208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.085168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.085223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.085236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.085242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.085248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.085262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 
00:26:24.907 [2024-12-13 09:37:37.095196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.095279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.095293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.095300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.095306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.095320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.105213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.105274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.105287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.105294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.105300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.105313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.115234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.115293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.115307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.115313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.115319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.115336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 
00:26:24.907 [2024-12-13 09:37:37.125252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.125307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.125321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.125327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.125333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.125346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.135283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.907 [2024-12-13 09:37:37.135341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.907 [2024-12-13 09:37:37.135355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.907 [2024-12-13 09:37:37.135361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.907 [2024-12-13 09:37:37.135366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.907 [2024-12-13 09:37:37.135380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.907 qpair failed and we were unable to recover it. 00:26:24.907 [2024-12-13 09:37:37.145330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.145386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.145399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.145405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.145411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.145424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 
00:26:24.908 [2024-12-13 09:37:37.155287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.155341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.155354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.155361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.155367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.155381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.165377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.165462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.165477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.165484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.165490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.165504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.175391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.175468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.175482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.175489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.175494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.175508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 
00:26:24.908 [2024-12-13 09:37:37.185437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.185501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.185515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.185521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.185527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.185541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.195461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.195515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.195528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.195535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.195541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.195556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.205523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.205607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.205621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.205631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.205637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.205652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 
00:26:24.908 [2024-12-13 09:37:37.215507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.215564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.215577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.215583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.215588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.215602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.225557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.225612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.225624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.225631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.225637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.225650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.235579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.235680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.235694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.235700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.235706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.235720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 
00:26:24.908 [2024-12-13 09:37:37.245617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.245676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.245689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.245695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.245700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.245717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.255629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.255683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.908 [2024-12-13 09:37:37.255696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.908 [2024-12-13 09:37:37.255703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.908 [2024-12-13 09:37:37.255708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.908 [2024-12-13 09:37:37.255722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.908 qpair failed and we were unable to recover it. 00:26:24.908 [2024-12-13 09:37:37.265684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.908 [2024-12-13 09:37:37.265742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.909 [2024-12-13 09:37:37.265755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.909 [2024-12-13 09:37:37.265761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.909 [2024-12-13 09:37:37.265767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:24.909 [2024-12-13 09:37:37.265781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.909 qpair failed and we were unable to recover it. 
00:26:25.169 [2024-12-13 09:37:37.275737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.275815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.275833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.275840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.275846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.275861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.169 qpair failed and we were unable to recover it. 00:26:25.169 [2024-12-13 09:37:37.285752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.285806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.285822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.285829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.285835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.285850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.169 qpair failed and we were unable to recover it. 00:26:25.169 [2024-12-13 09:37:37.295756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.295817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.295831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.295837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.295843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.295856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.169 qpair failed and we were unable to recover it. 
00:26:25.169 [2024-12-13 09:37:37.305796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.305854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.305868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.305874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.305880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.305894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.169 qpair failed and we were unable to recover it. 00:26:25.169 [2024-12-13 09:37:37.315816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.315902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.315917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.315924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.315929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.315943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.169 qpair failed and we were unable to recover it. 00:26:25.169 [2024-12-13 09:37:37.325853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.169 [2024-12-13 09:37:37.325908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.169 [2024-12-13 09:37:37.325921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.169 [2024-12-13 09:37:37.325928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.169 [2024-12-13 09:37:37.325933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.169 [2024-12-13 09:37:37.325947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 
00:26:25.170 [2024-12-13 09:37:37.335908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.335964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.335978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.335990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.335996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.336010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.345918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.345975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.345988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.345994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.346000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.346014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.355909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.356001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.356015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.356022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.356028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.356042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 
00:26:25.170 [2024-12-13 09:37:37.366003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.366061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.366074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.366080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.366086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.366100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.375919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.375976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.375989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.375996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.376002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.376019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.386043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.386109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.386122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.386128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.386133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.386148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 
00:26:25.170 [2024-12-13 09:37:37.396053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.396125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.396138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.396145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.396151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.396165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.406077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.406136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.406149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.406156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.406161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.406176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.416098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.416155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.416169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.416176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.416182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.416196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 
00:26:25.170 [2024-12-13 09:37:37.426144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.426204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.426217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.426224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.426230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.426244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.436177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.436234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.436247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.436254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.436260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.436274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 00:26:25.170 [2024-12-13 09:37:37.446209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.170 [2024-12-13 09:37:37.446263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.170 [2024-12-13 09:37:37.446278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.170 [2024-12-13 09:37:37.446284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.170 [2024-12-13 09:37:37.446289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.170 [2024-12-13 09:37:37.446303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.170 qpair failed and we were unable to recover it. 
00:26:25.171 [2024-12-13 09:37:37.456225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.456278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.456291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.456298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.456304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.456318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.171 [2024-12-13 09:37:37.466267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.466322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.466335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.466344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.466350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.466364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.171 [2024-12-13 09:37:37.476299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.476380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.476394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.476401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.476407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.476421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 
00:26:25.171 [2024-12-13 09:37:37.486308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.486360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.486374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.486380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.486386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.486400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.171 [2024-12-13 09:37:37.496341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.496394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.496408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.496414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.496420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.496433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.171 [2024-12-13 09:37:37.506375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.506458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.506472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.506479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.506484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.506501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 
00:26:25.171 [2024-12-13 09:37:37.516403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.516462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.516475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.516482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.516487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.516502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.171 [2024-12-13 09:37:37.526420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.171 [2024-12-13 09:37:37.526476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.171 [2024-12-13 09:37:37.526489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.171 [2024-12-13 09:37:37.526496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.171 [2024-12-13 09:37:37.526501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.171 [2024-12-13 09:37:37.526515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.171 qpair failed and we were unable to recover it. 00:26:25.432 [2024-12-13 09:37:37.536460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.536520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.536537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.536543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.536549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.536565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 
00:26:25.432 [2024-12-13 09:37:37.546512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.546590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.546607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.546614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.546619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.546636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 00:26:25.432 [2024-12-13 09:37:37.556542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.556599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.556613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.556620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.556625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.556640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 00:26:25.432 [2024-12-13 09:37:37.566559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.566620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.566634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.566640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.566646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.566661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 
00:26:25.432 [2024-12-13 09:37:37.576589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.576655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.576668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.576674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.576680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.576694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 00:26:25.432 [2024-12-13 09:37:37.586630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.586686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.586701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.586708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.586714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.586729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 00:26:25.432 [2024-12-13 09:37:37.596669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.596729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.596743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.596752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.432 [2024-12-13 09:37:37.596758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.432 [2024-12-13 09:37:37.596773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.432 qpair failed and we were unable to recover it. 
00:26:25.432 [2024-12-13 09:37:37.606604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.432 [2024-12-13 09:37:37.606660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.432 [2024-12-13 09:37:37.606674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.432 [2024-12-13 09:37:37.606681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.606688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.606702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.616741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.616819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.616833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.616839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.616845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.616859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.626723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.626780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.626794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.626801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.626806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.626820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 
00:26:25.433 [2024-12-13 09:37:37.636806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.636859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.636872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.636879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.636885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.636902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.646831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.646880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.646894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.646900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.646906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.646920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.656817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.656873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.656887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.656893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.656899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.656912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 
00:26:25.433 [2024-12-13 09:37:37.666854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.666912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.666925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.666932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.666938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.666952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.676868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.676924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.676937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.676944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.676950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.676964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.686887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.686948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.686961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.686968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.686973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.686986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 
00:26:25.433 [2024-12-13 09:37:37.696917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.696999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.697013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.697020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.697026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.697040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.706945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.707004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.707017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.707024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.707030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.707043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 00:26:25.433 [2024-12-13 09:37:37.716992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.717048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.717061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.717068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.717073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.433 [2024-12-13 09:37:37.717087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.433 qpair failed and we were unable to recover it. 
00:26:25.433 [2024-12-13 09:37:37.727046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.433 [2024-12-13 09:37:37.727108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.433 [2024-12-13 09:37:37.727121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.433 [2024-12-13 09:37:37.727130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.433 [2024-12-13 09:37:37.727136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.727150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.434 [2024-12-13 09:37:37.737035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.737090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.737103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.737109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.737115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.737129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.434 [2024-12-13 09:37:37.747089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.747146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.747160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.747166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.747172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.747186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 
00:26:25.434 [2024-12-13 09:37:37.757116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.757169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.757182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.757188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.757194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.757208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.434 [2024-12-13 09:37:37.767163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.767218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.767231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.767238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.767243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.767260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.434 [2024-12-13 09:37:37.777172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.777227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.777241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.777247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.777253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.777267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 
00:26:25.434 [2024-12-13 09:37:37.787144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.787211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.787224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.787231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.787237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.787252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.434 [2024-12-13 09:37:37.797240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.434 [2024-12-13 09:37:37.797301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.434 [2024-12-13 09:37:37.797318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.434 [2024-12-13 09:37:37.797326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.434 [2024-12-13 09:37:37.797332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.434 [2024-12-13 09:37:37.797347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.434 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.807197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.807247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.807264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.807271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.807277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.807292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 
00:26:25.695 [2024-12-13 09:37:37.817296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.817358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.817373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.817380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.817385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.817400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.827393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.827455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.827469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.827476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.827482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.827497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.837359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.837417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.837430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.837437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.837443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.837461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 
00:26:25.695 [2024-12-13 09:37:37.847393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.847451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.847465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.847472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.847478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.847493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.857397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.857500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.857514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.857524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.857530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.857545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.867481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.867552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.867565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.867571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.867578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.867592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 
00:26:25.695 [2024-12-13 09:37:37.877485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.877539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.877553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.877560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.877565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.877580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.887508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.887562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.887576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.887582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.887588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.887602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 00:26:25.695 [2024-12-13 09:37:37.897534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.897591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.695 [2024-12-13 09:37:37.897604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.695 [2024-12-13 09:37:37.897610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.695 [2024-12-13 09:37:37.897616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.695 [2024-12-13 09:37:37.897630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.695 qpair failed and we were unable to recover it. 
00:26:25.695 [2024-12-13 09:37:37.907574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.695 [2024-12-13 09:37:37.907633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.907646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.907653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.907658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.907672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.917600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.917659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.917673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.917679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.917685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.917699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.927677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.927734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.927748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.927755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.927760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.927775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 
00:26:25.696 [2024-12-13 09:37:37.937692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.937745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.937760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.937766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.937772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.937787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.947696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.947760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.947774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.947780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.947786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.947800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.957710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.957766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.957779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.957786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.957791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.957805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 
00:26:25.696 [2024-12-13 09:37:37.967683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.967743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.967757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.967763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.967769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.967783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.977762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.977819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.977833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.977840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.977845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.977860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:37.987793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.987854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.987867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.987877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.987883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.987897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 
00:26:25.696 [2024-12-13 09:37:37.997871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:37.997932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:37.997946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:37.997952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:37.997958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:37.997972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:38.007789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:38.007843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:38.007856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:38.007863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:38.007868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:38.007883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 00:26:25.696 [2024-12-13 09:37:38.017860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:38.017917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:38.017931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.696 [2024-12-13 09:37:38.017937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.696 [2024-12-13 09:37:38.017943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.696 [2024-12-13 09:37:38.017957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.696 qpair failed and we were unable to recover it. 
00:26:25.696 [2024-12-13 09:37:38.027909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.696 [2024-12-13 09:37:38.027996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.696 [2024-12-13 09:37:38.028010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.697 [2024-12-13 09:37:38.028016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.697 [2024-12-13 09:37:38.028022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.697 [2024-12-13 09:37:38.028036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.697 qpair failed and we were unable to recover it. 00:26:25.697 [2024-12-13 09:37:38.037890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.697 [2024-12-13 09:37:38.037951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.697 [2024-12-13 09:37:38.037965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.697 [2024-12-13 09:37:38.037971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.697 [2024-12-13 09:37:38.037977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.697 [2024-12-13 09:37:38.037991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.697 qpair failed and we were unable to recover it. 00:26:25.697 [2024-12-13 09:37:38.047913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.697 [2024-12-13 09:37:38.047966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.697 [2024-12-13 09:37:38.047979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.697 [2024-12-13 09:37:38.047986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.697 [2024-12-13 09:37:38.047991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.697 [2024-12-13 09:37:38.048004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.697 qpair failed and we were unable to recover it. 
00:26:25.697 [2024-12-13 09:37:38.057952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.697 [2024-12-13 09:37:38.058047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.697 [2024-12-13 09:37:38.058064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.697 [2024-12-13 09:37:38.058071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.697 [2024-12-13 09:37:38.058077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.697 [2024-12-13 09:37:38.058092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.697 qpair failed and we were unable to recover it. 00:26:25.957 [2024-12-13 09:37:38.067996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.957 [2024-12-13 09:37:38.068055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.957 [2024-12-13 09:37:38.068071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.068078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.068084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.068100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.078104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.078210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.078225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.078232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.078238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.078253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 
00:26:25.958 [2024-12-13 09:37:38.088102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.088157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.088171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.088177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.088183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.088198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.098125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.098181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.098195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.098201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.098207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.098221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.108140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.108198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.108211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.108218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.108223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.108238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 
00:26:25.958 [2024-12-13 09:37:38.118217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.118287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.118301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.118311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.118317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.118331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.128257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.128348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.128363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.128369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.128375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.128390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.138243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.138299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.138313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.138320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.138325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.138340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 
00:26:25.958 [2024-12-13 09:37:38.148291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.148351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.148364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.148371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.148376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.148390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.158313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.158370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.158383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.158389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.158395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.158410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.168351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.168418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.168431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.168438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.168444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.168463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 
00:26:25.958 [2024-12-13 09:37:38.178359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.178415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.178429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.958 [2024-12-13 09:37:38.178436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.958 [2024-12-13 09:37:38.178441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.958 [2024-12-13 09:37:38.178460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.958 qpair failed and we were unable to recover it. 00:26:25.958 [2024-12-13 09:37:38.188378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.958 [2024-12-13 09:37:38.188439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.958 [2024-12-13 09:37:38.188458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.188465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.188470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.188485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.198427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.198490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.198503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.198510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.198516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.198530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 
00:26:25.959 [2024-12-13 09:37:38.208483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.208545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.208558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.208564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.208570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.208584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.218463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.218550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.218564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.218571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.218576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.218591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.228529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.228585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.228598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.228605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.228610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.228624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 
00:26:25.959 [2024-12-13 09:37:38.238529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.238589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.238603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.238609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.238615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.238629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.248543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.248600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.248613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.248622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.248628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.248642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.258602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.258666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.258679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.258685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.258691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.258705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 
00:26:25.959 [2024-12-13 09:37:38.268646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.268708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.268720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.268726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.268732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.268746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.278627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.278684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.278698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.278704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.278710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.278724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.288646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.288701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.288714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.288720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.288726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.288740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 
00:26:25.959 [2024-12-13 09:37:38.298679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.298738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.298751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.298758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.959 [2024-12-13 09:37:38.298763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.959 [2024-12-13 09:37:38.298777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.959 qpair failed and we were unable to recover it. 00:26:25.959 [2024-12-13 09:37:38.308721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.959 [2024-12-13 09:37:38.308778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.959 [2024-12-13 09:37:38.308791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.959 [2024-12-13 09:37:38.308798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.960 [2024-12-13 09:37:38.308804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.960 [2024-12-13 09:37:38.308817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.960 qpair failed and we were unable to recover it. 00:26:25.960 [2024-12-13 09:37:38.318754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.960 [2024-12-13 09:37:38.318813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.960 [2024-12-13 09:37:38.318828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.960 [2024-12-13 09:37:38.318835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.960 [2024-12-13 09:37:38.318840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:25.960 [2024-12-13 09:37:38.318856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.960 qpair failed and we were unable to recover it. 
00:26:26.220 [2024-12-13 09:37:38.328760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.220 [2024-12-13 09:37:38.328821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.220 [2024-12-13 09:37:38.328837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.220 [2024-12-13 09:37:38.328844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.220 [2024-12-13 09:37:38.328850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.220 [2024-12-13 09:37:38.328867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.220 qpair failed and we were unable to recover it. 00:26:26.220 [2024-12-13 09:37:38.338796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.220 [2024-12-13 09:37:38.338853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.220 [2024-12-13 09:37:38.338868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.220 [2024-12-13 09:37:38.338875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.220 [2024-12-13 09:37:38.338880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.220 [2024-12-13 09:37:38.338896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.220 qpair failed and we were unable to recover it. 00:26:26.220 [2024-12-13 09:37:38.348838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.220 [2024-12-13 09:37:38.348899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.220 [2024-12-13 09:37:38.348914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.220 [2024-12-13 09:37:38.348920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.220 [2024-12-13 09:37:38.348926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.220 [2024-12-13 09:37:38.348940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.220 qpair failed and we were unable to recover it. 
00:26:26.220 [2024-12-13 09:37:38.358892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.220 [2024-12-13 09:37:38.358949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.220 [2024-12-13 09:37:38.358964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.220 [2024-12-13 09:37:38.358970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.220 [2024-12-13 09:37:38.358976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.220 [2024-12-13 09:37:38.358991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.220 qpair failed and we were unable to recover it. 00:26:26.220 [2024-12-13 09:37:38.368878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.220 [2024-12-13 09:37:38.368933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.220 [2024-12-13 09:37:38.368946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.220 [2024-12-13 09:37:38.368953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.368959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.368973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.378928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.379011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.379025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.379035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.379041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.379055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 
00:26:26.221 [2024-12-13 09:37:38.388883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.388940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.388953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.388959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.388967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.388981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.398967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.399022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.399035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.399042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.399047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.399061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.408994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.409051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.409064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.409071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.409077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.409091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 
00:26:26.221 [2024-12-13 09:37:38.419018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.419072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.419085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.419091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.419097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.419110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.429052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.429109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.429122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.429129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.429134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.429148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.439088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.439143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.439157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.439164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.439170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.439185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 
00:26:26.221 [2024-12-13 09:37:38.449110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.449160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.449173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.449179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.449185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.449199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.459179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.459229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.459243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.459249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.459255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.459269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.469177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.469246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.469259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.469265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.469271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.469285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 
00:26:26.221 [2024-12-13 09:37:38.479192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.479249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.479263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.479270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.479276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.479289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.489219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.489276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.489290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.489296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.489302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.221 [2024-12-13 09:37:38.489315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.221 qpair failed and we were unable to recover it. 00:26:26.221 [2024-12-13 09:37:38.499250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.221 [2024-12-13 09:37:38.499303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.221 [2024-12-13 09:37:38.499316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.221 [2024-12-13 09:37:38.499323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.221 [2024-12-13 09:37:38.499328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.499341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 
00:26:26.222 [2024-12-13 09:37:38.509287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.509342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.509355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.509365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.509370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.509385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.222 [2024-12-13 09:37:38.519301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.519354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.519367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.519374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.519379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.519393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.222 [2024-12-13 09:37:38.529325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.529381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.529394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.529401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.529407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.529421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 
00:26:26.222 [2024-12-13 09:37:38.539358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.539410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.539423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.539430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.539436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.539454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.222 [2024-12-13 09:37:38.549396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.549455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.549468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.549475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.549480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.549495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.222 [2024-12-13 09:37:38.559394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.559456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.559470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.559477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.559482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.559496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 
00:26:26.222 [2024-12-13 09:37:38.569453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.569512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.569525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.569532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.569538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.569553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.222 [2024-12-13 09:37:38.579458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.222 [2024-12-13 09:37:38.579516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.222 [2024-12-13 09:37:38.579530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.222 [2024-12-13 09:37:38.579536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.222 [2024-12-13 09:37:38.579541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.222 [2024-12-13 09:37:38.579555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.222 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.589496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.589553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.589571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.589578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.589584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.589600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-12-13 09:37:38.599547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.599602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.599618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.599625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.599630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.599646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.609568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.609623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.609638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.609645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.609651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.609666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.619587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.619668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.619681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.619688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.619694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.619708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-12-13 09:37:38.629675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.629734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.629747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.629753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.629759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.629773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.639621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.639676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.639689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.639699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.639704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.639719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.649689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.649740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.649754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.649761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.649767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.649781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-12-13 09:37:38.659710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.659762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.659776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.659782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.659788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.659801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.669751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.483 [2024-12-13 09:37:38.669809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.483 [2024-12-13 09:37:38.669824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.483 [2024-12-13 09:37:38.669830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.483 [2024-12-13 09:37:38.669837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.483 [2024-12-13 09:37:38.669851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-12-13 09:37:38.679764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.679822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.679836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.679842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.679848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.679862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-12-13 09:37:38.689789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.689844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.689858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.689864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.689870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.689885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.699797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.699854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.699867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.699874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.699879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.699893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.709859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.709917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.709931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.709937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.709943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.709957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-12-13 09:37:38.719899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.719990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.720005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.720011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.720017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.720030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.729899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.729951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.729964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.729971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.729976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.729990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.739873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.739930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.739942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.739949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.739955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.739968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-12-13 09:37:38.749968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.750023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.750036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.750042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.750048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.750061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.759993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.760051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.760064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.760071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.760076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.760090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.770025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.770077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.770091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.770100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.770106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.770120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-12-13 09:37:38.780053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.780133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.780147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.780153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.780159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.780173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.790081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.484 [2024-12-13 09:37:38.790140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.484 [2024-12-13 09:37:38.790153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.484 [2024-12-13 09:37:38.790160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.484 [2024-12-13 09:37:38.790165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.484 [2024-12-13 09:37:38.790179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-12-13 09:37:38.800106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.485 [2024-12-13 09:37:38.800162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.485 [2024-12-13 09:37:38.800175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.485 [2024-12-13 09:37:38.800182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.485 [2024-12-13 09:37:38.800187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.485 [2024-12-13 09:37:38.800202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.485 qpair failed and we were unable to recover it. 
00:26:26.485 [2024-12-13 09:37:38.810126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.485 [2024-12-13 09:37:38.810179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.485 [2024-12-13 09:37:38.810193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.485 [2024-12-13 09:37:38.810199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.485 [2024-12-13 09:37:38.810205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.485 [2024-12-13 09:37:38.810219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-12-13 09:37:38.820200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.485 [2024-12-13 09:37:38.820256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.485 [2024-12-13 09:37:38.820269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.485 [2024-12-13 09:37:38.820275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.485 [2024-12-13 09:37:38.820281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.485 [2024-12-13 09:37:38.820295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-12-13 09:37:38.830201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.485 [2024-12-13 09:37:38.830258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.485 [2024-12-13 09:37:38.830271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.485 [2024-12-13 09:37:38.830278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.485 [2024-12-13 09:37:38.830284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.485 [2024-12-13 09:37:38.830298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.485 qpair failed and we were unable to recover it. 
00:26:26.485 [2024-12-13 09:37:38.840218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.485 [2024-12-13 09:37:38.840272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.485 [2024-12-13 09:37:38.840285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.485 [2024-12-13 09:37:38.840292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.485 [2024-12-13 09:37:38.840297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.485 [2024-12-13 09:37:38.840311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.745 [2024-12-13 09:37:38.850254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.850311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.850328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.850334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.850340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.850356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 00:26:26.745 [2024-12-13 09:37:38.860271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.860332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.860349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.860356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.860362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.860378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 
00:26:26.745 [2024-12-13 09:37:38.870313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.870373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.870387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.870394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.870400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.870415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 00:26:26.745 [2024-12-13 09:37:38.880300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.880353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.880367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.880374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.880379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.880394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 00:26:26.745 [2024-12-13 09:37:38.890355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.890440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.890460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.890467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.890473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.890489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 
00:26:26.745 [2024-12-13 09:37:38.900423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.745 [2024-12-13 09:37:38.900482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.745 [2024-12-13 09:37:38.900495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.745 [2024-12-13 09:37:38.900505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.745 [2024-12-13 09:37:38.900510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.745 [2024-12-13 09:37:38.900525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.745 qpair failed and we were unable to recover it. 00:26:26.745 [2024-12-13 09:37:38.910418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.746 [2024-12-13 09:37:38.910481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.746 [2024-12-13 09:37:38.910495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.746 [2024-12-13 09:37:38.910503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.746 [2024-12-13 09:37:38.910509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb121a0 00:26:26.746 [2024-12-13 09:37:38.910524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.746 qpair failed and we were unable to recover it. 00:26:26.746 [2024-12-13 09:37:38.920477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.746 [2024-12-13 09:37:38.920539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.746 [2024-12-13 09:37:38.920560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.746 [2024-12-13 09:37:38.920568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.746 [2024-12-13 09:37:38.920575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd56c000b90 00:26:26.746 [2024-12-13 09:37:38.920594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.746 qpair failed and we were unable to recover it. 
00:26:26.746 [2024-12-13 09:37:38.930481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.746 [2024-12-13 09:37:38.930534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.746 [2024-12-13 09:37:38.930548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.746 [2024-12-13 09:37:38.930555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.746 [2024-12-13 09:37:38.930561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd56c000b90 00:26:26.746 [2024-12-13 09:37:38.930577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.746 qpair failed and we were unable to recover it. 00:26:26.746 [2024-12-13 09:37:38.930668] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:26.746 A controller has encountered a failure and is being reset. 00:26:26.746 [2024-12-13 09:37:38.940446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.746 [2024-12-13 09:37:38.940513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.746 [2024-12-13 09:37:38.940541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.746 [2024-12-13 09:37:38.940554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.746 [2024-12-13 09:37:38.940572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd568000b90 00:26:26.746 [2024-12-13 09:37:38.940600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.746 qpair failed and we were unable to recover it. 00:26:26.746 [2024-12-13 09:37:38.950533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.746 [2024-12-13 09:37:38.950593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.746 [2024-12-13 09:37:38.950608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.746 [2024-12-13 09:37:38.950615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.746 [2024-12-13 09:37:38.950622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd568000b90 00:26:26.746 [2024-12-13 09:37:38.950638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.746 qpair failed and we were unable to recover it. 00:26:26.746 Controller properly reset. 
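[Editor's note] At this point the log shows the keep-alive submission failing ("Submitting Keep Alive failed") followed by "A controller has encountered a failure and is being reset" and, once recovery succeeds, "Controller properly reset." As a rough illustration only, the following hypothetical C sketch shows one way an SPDK host application can react to that situation, using the public API calls spdk_nvme_ctrlr_process_admin_completions and spdk_nvme_ctrlr_reset; it is not the test's actual code, and the surrounding function is an assumption for illustration.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical sketch: periodic admin-queue servicing with a reset fallback. */
static void service_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
        int32_t rc;

        /* Poll the admin queue; the driver generates keep-alives internally
         * while this is called periodically. */
        rc = spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        if (rc < 0) {
                /* Admin queue (and therefore keep-alive) is broken: attempt a
                 * full controller reset, as the log above reports. */
                fprintf(stderr, "admin queue failed (%d), resetting controller\n", rc);
                if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                        printf("controller properly reset\n");
                } else {
                        fprintf(stderr, "controller reset failed\n");
                }
        }
}
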
00:26:26.746 Initializing NVMe Controllers 00:26:26.746 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:26.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:26.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:26.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:26.746 Initialization complete. Launching workers. 00:26:26.746 Starting thread on core 1 00:26:26.746 Starting thread on core 2 00:26:26.746 Starting thread on core 3 00:26:26.746 Starting thread on core 0 00:26:26.746 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:26.746 00:26:26.746 real 0m11.524s 00:26:26.746 user 0m21.593s 00:26:26.746 sys 0m4.706s 00:26:26.746 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.746 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.746 ************************************ 00:26:26.746 END TEST nvmf_target_disconnect_tc2 00:26:26.746 ************************************ 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:27.005 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.006 rmmod nvme_tcp 00:26:27.006 rmmod nvme_fabrics 00:26:27.006 rmmod nvme_keyring 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3481284 ']' 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3481284 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3481284 ']' 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3481284 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3481284 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3481284' 00:26:27.006 killing process with pid 3481284 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3481284 00:26:27.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3481284 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.265 09:37:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.170 09:37:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.170 00:26:29.170 real 0m19.964s 00:26:29.170 user 0m49.737s 00:26:29.170 sys 0m9.396s 00:26:29.170 09:37:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.170 09:37:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:29.170 ************************************ 00:26:29.170 END TEST nvmf_target_disconnect 00:26:29.170 ************************************ 00:26:29.429 09:37:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:29.429 00:26:29.429 real 5m40.991s 00:26:29.429 user 10m26.201s 00:26:29.429 sys 1m51.045s 00:26:29.429 09:37:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.429 09:37:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.429 ************************************ 00:26:29.429 END TEST nvmf_host 00:26:29.429 ************************************ 00:26:29.429 09:37:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:29.429 09:37:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:29.429 09:37:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:29.429 09:37:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:29.429 09:37:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.429 09:37:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:29.429 ************************************ 00:26:29.429 START TEST nvmf_target_core_interrupt_mode 00:26:29.429 ************************************ 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:29.429 * Looking for test storage... 00:26:29.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.429 --rc genhtml_branch_coverage=1 00:26:29.429 --rc genhtml_function_coverage=1 00:26:29.429 --rc genhtml_legend=1 00:26:29.429 --rc geninfo_all_blocks=1 00:26:29.429 --rc geninfo_unexecuted_blocks=1 00:26:29.429 00:26:29.429 ' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.429 --rc genhtml_branch_coverage=1 00:26:29.429 --rc genhtml_function_coverage=1 00:26:29.429 --rc genhtml_legend=1 00:26:29.429 --rc geninfo_all_blocks=1 00:26:29.429 --rc geninfo_unexecuted_blocks=1 00:26:29.429 00:26:29.429 ' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.429 --rc genhtml_branch_coverage=1 00:26:29.429 --rc genhtml_function_coverage=1 00:26:29.429 --rc genhtml_legend=1 00:26:29.429 --rc geninfo_all_blocks=1 00:26:29.429 --rc geninfo_unexecuted_blocks=1 00:26:29.429 00:26:29.429 ' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.429 --rc genhtml_branch_coverage=1 00:26:29.429 --rc genhtml_function_coverage=1 00:26:29.429 --rc genhtml_legend=1 00:26:29.429 --rc geninfo_all_blocks=1 00:26:29.429 --rc geninfo_unexecuted_blocks=1 00:26:29.429 00:26:29.429 ' 00:26:29.429 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:29.690 ************************************ 00:26:29.690 START TEST nvmf_abort 00:26:29.690 ************************************ 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:29.690 * Looking for test storage... 00:26:29.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:29.690 09:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.690 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.691 --rc genhtml_branch_coverage=1 00:26:29.691 --rc genhtml_function_coverage=1 00:26:29.691 --rc genhtml_legend=1 00:26:29.691 --rc geninfo_all_blocks=1 00:26:29.691 --rc geninfo_unexecuted_blocks=1 00:26:29.691 00:26:29.691 ' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.691 --rc genhtml_branch_coverage=1 00:26:29.691 --rc genhtml_function_coverage=1 00:26:29.691 --rc genhtml_legend=1 00:26:29.691 --rc geninfo_all_blocks=1 00:26:29.691 --rc geninfo_unexecuted_blocks=1 00:26:29.691 00:26:29.691 ' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.691 --rc genhtml_branch_coverage=1 00:26:29.691 --rc genhtml_function_coverage=1 00:26:29.691 --rc genhtml_legend=1 00:26:29.691 --rc geninfo_all_blocks=1 00:26:29.691 --rc geninfo_unexecuted_blocks=1 00:26:29.691 00:26:29.691 ' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.691 --rc genhtml_branch_coverage=1 00:26:29.691 --rc genhtml_function_coverage=1 00:26:29.691 --rc genhtml_legend=1 00:26:29.691 --rc geninfo_all_blocks=1 00:26:29.691 --rc geninfo_unexecuted_blocks=1 00:26:29.691 00:26:29.691 ' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.691 09:37:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.691 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.061 09:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.335 09:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:35.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:35.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.335 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:35.336 Found net devices under 0000:af:00.0: cvl_0_0 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:35.336 Found net devices under 0000:af:00.1: cvl_0_1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.336 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:26:35.595 00:26:35.595 --- 10.0.0.2 ping statistics --- 00:26:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.595 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:26:35.595 00:26:35.595 --- 10.0.0.1 ping statistics --- 00:26:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.595 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3485836 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3485836 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3485836 ']' 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.595 09:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.595 [2024-12-13 09:37:47.872293] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:35.595 [2024-12-13 09:37:47.873154] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:26:35.595 [2024-12-13 09:37:47.873186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.595 [2024-12-13 09:37:47.939695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:35.854 [2024-12-13 09:37:47.981611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.854 [2024-12-13 09:37:47.981647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.854 [2024-12-13 09:37:47.981655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.854 [2024-12-13 09:37:47.981661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.854 [2024-12-13 09:37:47.981665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.854 [2024-12-13 09:37:47.982936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.854 [2024-12-13 09:37:47.983023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.854 [2024-12-13 09:37:47.983024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.854 [2024-12-13 09:37:48.050498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:35.854 [2024-12-13 09:37:48.050524] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:35.854 [2024-12-13 09:37:48.050901] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:35.854 [2024-12-13 09:37:48.050930] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.854 [2024-12-13 09:37:48.115852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.854 Malloc0 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.854 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.855 Delay0 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.855 [2024-12-13 09:37:48.195671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.855 09:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:36.113 [2024-12-13 09:37:48.311185] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:38.650 Initializing NVMe Controllers 00:26:38.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:38.650 controller IO queue size 128 less than required 00:26:38.650 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:38.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:38.650 Initialization complete. Launching workers. 
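For reference, the RPC sequence that the abort test drives in the trace above can be condensed into the sketch below. It is not a replay of the trace: SPDK_DIR stands in for the workspace path, rpc.py is assumed to talk to the default /var/tmp/spdk.sock, and the transport, bdev, listener, and workload parameters are simply the ones visible in the xtrace lines.

# Condensed sketch of the nvmf_abort target setup and workload seen above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

# TCP transport with the options used by the test (-o -u 8192 -a 256)
rpc nvmf_create_transport -t tcp -o -u 8192 -a 256

# Backing bdevs: a 64 MiB malloc bdev (4096-byte blocks) wrapped by a delay bdev
rpc bdev_malloc_create 64 4096 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Subsystem cnode0 with Delay0 as namespace 1, listening on 10.0.0.2:4420
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host-side abort workload: 1 second, queue depth 128, warnings only
"$SPDK_DIR/build/examples/abort" \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -c 0x1 -t 1 -l warning -q 128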
00:26:38.650 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 38039 00:26:38.650 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38100, failed to submit 66 00:26:38.650 success 38039, unsuccessful 61, failed 0 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.650 rmmod nvme_tcp 00:26:38.650 rmmod nvme_fabrics 00:26:38.650 rmmod nvme_keyring 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3485836 ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3485836 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3485836 ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3485836 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3485836 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3485836' 00:26:38.650 killing process with pid 3485836 
00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3485836 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3485836 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.650 09:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.556 00:26:40.556 real 0m10.939s 00:26:40.556 user 0m10.441s 00:26:40.556 sys 0m5.488s 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:40.556 ************************************ 00:26:40.556 END TEST nvmf_abort 00:26:40.556 ************************************ 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:40.556 ************************************ 00:26:40.556 START TEST nvmf_ns_hotplug_stress 00:26:40.556 ************************************ 00:26:40.556 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:40.815 * Looking for test storage... 
00:26:40.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.815 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:40.815 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:26:40.815 09:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:40.815 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:40.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.816 --rc genhtml_branch_coverage=1 00:26:40.816 --rc genhtml_function_coverage=1 00:26:40.816 --rc genhtml_legend=1 00:26:40.816 --rc geninfo_all_blocks=1 00:26:40.816 --rc geninfo_unexecuted_blocks=1 00:26:40.816 00:26:40.816 ' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:40.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.816 --rc genhtml_branch_coverage=1 00:26:40.816 --rc genhtml_function_coverage=1 00:26:40.816 --rc genhtml_legend=1 00:26:40.816 --rc geninfo_all_blocks=1 00:26:40.816 --rc geninfo_unexecuted_blocks=1 00:26:40.816 00:26:40.816 ' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:40.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.816 --rc genhtml_branch_coverage=1 00:26:40.816 --rc genhtml_function_coverage=1 00:26:40.816 --rc genhtml_legend=1 00:26:40.816 --rc geninfo_all_blocks=1 00:26:40.816 --rc geninfo_unexecuted_blocks=1 00:26:40.816 00:26:40.816 ' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:40.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.816 --rc genhtml_branch_coverage=1 00:26:40.816 --rc genhtml_function_coverage=1 
00:26:40.816 --rc genhtml_legend=1 00:26:40.816 --rc geninfo_all_blocks=1 00:26:40.816 --rc geninfo_unexecuted_blocks=1 00:26:40.816 00:26:40.816 ' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.816 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.817 09:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:46.086 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.086 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:46.086 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:46.086 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:46.086 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:46.086 09:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:46.087 09:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:46.087 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:46.087 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.087 
09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:46.087 Found net devices under 0000:af:00.0: cvl_0_0 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:46.087 Found net devices under 0000:af:00.1: cvl_0_1 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.087 09:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.087 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:46.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:26:46.088 00:26:46.088 --- 10.0.0.2 ping statistics --- 00:26:46.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.088 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:46.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:46.088 00:26:46.088 --- 10.0.0.1 ping statistics --- 00:26:46.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.088 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3489688 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3489688 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3489688 ']' 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.088 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:46.088 [2024-12-13 09:37:58.339863] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:46.088 [2024-12-13 09:37:58.340788] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:26:46.088 [2024-12-13 09:37:58.340824] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.088 [2024-12-13 09:37:58.408722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:46.088 [2024-12-13 09:37:58.448637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.088 [2024-12-13 09:37:58.448671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.088 [2024-12-13 09:37:58.448681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.088 [2024-12-13 09:37:58.448687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.088 [2024-12-13 09:37:58.448692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.088 [2024-12-13 09:37:58.449977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.088 [2024-12-13 09:37:58.450072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.088 [2024-12-13 09:37:58.450074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.346 [2024-12-13 09:37:58.517515] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:46.346 [2024-12-13 09:37:58.517526] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:46.346 [2024-12-13 09:37:58.517804] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:46.347 [2024-12-13 09:37:58.517857] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
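Earlier in this trace, nvmftestinit builds the two-port test bed by moving one E810 netdev into a private network namespace and then starting nvmf_tgt inside it in interrupt mode. A minimal sketch of that setup follows, assuming root privileges and the interface names cvl_0_0 / cvl_0_1 printed above; the iptables comment tagging used by the helper is omitted for brevity.

# Sketch of the netns-based test bed and target launch seen above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target port lives in its own namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic reach port 4420 and verify both directions respond
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace in interrupt mode on cores 1-3 (-m 0xE)
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE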
00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:46.347 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:46.605 [2024-12-13 09:37:58.746735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.605 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:46.605 09:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.863 [2024-12-13 09:37:59.139102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.863 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:47.122 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:47.381 Malloc0 00:26:47.381 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:47.381 Delay0 00:26:47.381 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.639 09:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:47.898 NULL1 00:26:47.898 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
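From this point the test launches spdk_nvme_perf against cnode1 and, while it is running, repeatedly hot-removes and re-adds namespace 1 and grows the NULL1 bdev, which is what produces the repeating ns_hotplug_stress.sh@44 to @50 lines that follow. A minimal sketch of that loop, using the same binaries and parameters as in the trace (not the test script verbatim):

# Sketch of the namespace hotplug loop driven while spdk_nvme_perf runs.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"

# Host-side 30-second random-read load against cnode1 (queue depth 128, 512-byte IO)
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
# While perf is still alive, unplug/replug namespace 1 and resize the null bdev
while kill -0 "$PERF_PID" 2>/dev/null; do
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  "$rpc" bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"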
00:26:48.157 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3489947 00:26:48.157 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:48.157 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.157 09:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:49.093 Read completed with error (sct=0, sc=11) 00:26:49.093 09:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:49.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:49.352 09:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:49.352 09:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:49.611 true 00:26:49.611 09:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:49.611 09:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.360 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.618 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:50.618 09:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:50.876 true 00:26:50.876 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:50.876 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.876 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:51.135 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:51.135 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:51.393 true 00:26:51.393 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:51.393 09:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:52.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.329 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:52.588 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:52.588 09:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:52.846 true 00:26:52.846 09:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:52.846 09:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.780 09:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:53.780 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:53.780 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:54.039 true 00:26:54.039 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:54.039 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.298 09:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:54.556 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:54.556 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:54.556 true 00:26:54.815 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:54.815 09:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.750 09:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:55.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:56.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:56.009 09:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:56.009 09:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:56.009 true 00:26:56.267 09:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:56.267 09:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.834 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.093 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:57.093 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:57.351 true 00:26:57.351 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:57.351 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.610 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.610 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:57.610 09:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:57.868 true 00:26:57.869 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:57.869 09:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:59.245 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:59.245 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:59.504 true 00:26:59.504 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:26:59.504 09:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.440 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.440 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:00.440 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:00.699 true 00:27:00.699 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:00.699 09:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:00.957 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.957 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:00.957 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:01.216 true 00:27:01.216 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:01.216 09:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.592 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:02.592 09:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:02.855 true 00:27:02.855 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:02.855 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:03.792 09:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.792 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:03.792 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:04.053 true 00:27:04.053 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:04.053 09:38:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.316 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.316 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:04.316 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:04.574 true 00:27:04.574 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:04.574 09:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 09:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.951 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:05.951 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:06.210 true 00:27:06.210 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:06.210 09:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.146 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.146 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:07.146 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:07.404 true 00:27:07.404 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:07.404 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.404 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.662 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:07.662 09:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:07.920 true 00:27:07.920 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:07.920 09:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.855 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:09.114 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:09.114 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:09.372 true 00:27:09.372 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:09.372 09:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.308 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.308 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:10.308 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:10.566 true 00:27:10.566 09:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:10.566 09:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:10.825 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.083 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:11.083 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:11.083 true 00:27:11.083 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:11.083 09:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 09:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.460 09:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:12.460 09:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:12.719 true 00:27:12.719 09:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:12.719 09:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:13.652 09:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.652 09:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:13.652 09:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:13.911 true 00:27:13.911 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:13.911 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.169 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.428 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:14.428 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:14.428 true 00:27:14.428 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:14.428 09:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.803 09:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.804 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:15.804 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:16.062 true 00:27:16.062 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:16.062 09:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.997 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.997 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:16.997 09:38:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:17.255 true 00:27:17.255 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:17.255 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.514 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.772 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:17.773 09:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:17.773 true 00:27:17.773 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:17.773 09:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.149 Initializing NVMe Controllers 00:27:19.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:19.149 Controller IO queue size 128, less than required. 00:27:19.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:19.149 Controller IO queue size 128, less than required. 00:27:19.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:19.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:19.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:19.149 Initialization complete. Launching workers. 
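The stretch of trace above is the first phase of the stress test: while a background I/O job (PID 3489947, the process probed by kill -0 at sh@44 and reaped by wait at sh@53) keeps reading from the subsystem, the script repeatedly detaches and re-attaches namespace 1 on nqn.2016-06.io.spdk:cnode1 and keeps growing the NULL1 null bdev one unit at a time. The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the I/O job reporting reads that arrive while a namespace is detached, which is expected behaviour for this test rather than a failure of it. A minimal sketch of the loop, reconstructed from the sh@44-@50 markers (the control flow, starting size and variable names are assumptions; the rpc.py path, NQN and bdev names are taken from the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as logged
    perf_pid=3489947                     # background I/O job seen in the trace
    null_size=1000                       # starting value is a guess; the log shows it reaching 1028

    while kill -0 "$perf_pid"; do                                           # sh@44: loop while I/O still runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # sh@45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                        # sh@49: bump the target size
        $rpc_py bdev_null_resize NULL1 $null_size                           # sh@50: resize the NULL1 bdev
    done
    wait "$perf_pid"                                                        # sh@53: collect the I/O job's result

Once the I/O job exits, the kill -0 check fails ("No such process" below) and the loop ends. The latency summary that follows is internally consistent: 2202.92 + 18747.51 = 20950.43 IO/s in the Total row, and its 10536.02 us average is the IOPS-weighted mean of the two per-namespace averages.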
00:27:19.149 ========================================================
00:27:19.149 Latency(us)
00:27:19.149 Device Information : IOPS MiB/s Average min max
00:27:19.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2202.92 1.08 42099.82 2561.45 1048921.59
00:27:19.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18747.51 9.15 6827.13 1560.87 368798.84
00:27:19.149 ========================================================
00:27:19.149 Total : 20950.43 10.23 10536.02 1560.87 1048921.59
00:27:19.149
00:27:19.149 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.149 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:19.149 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:19.407 true 00:27:19.407 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3489947 00:27:19.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3489947) - No such process 00:27:19.407 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3489947 00:27:19.407 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.407 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:19.666 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:19.666 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:19.666 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:19.666 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.666 09:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:19.925 null0 00:27:19.925 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:19.925 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:19.925 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:20.183 null1 00:27:20.183 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.183
09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.183 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:20.183 null2 00:27:20.183 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.183 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.184 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:20.442 null3 00:27:20.442 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.442 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.442 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:20.700 null4 00:27:20.700 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.700 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.700 09:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:20.700 null5 00:27:20.700 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.700 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.700 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:20.959 null6 00:27:20.959 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:20.959 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:20.959 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:21.218 null7 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.218 09:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:21.218 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
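From the nthreads=8 line onward the test is in its second phase: eight add_remove workers run in parallel, one per null bdev, and because they all share the same xtrace stream their "(( i ... ))", add_ns and remove_ns lines interleave, which is why the trace above and below looks shuffled. Reconstructed from the sh@14-@18 markers, each worker is roughly the following (a sketch, not necessarily the exact body in the repository; the rpc.py path and NQN are taken from the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # One hotplug worker: bind a bdev to a namespace ID, then add and remove it ten times.
    add_remove() {
        local nsid=$1 bdev=$2                                                        # sh@14
        for ((i = 0; i < 10; i++)); do                                               # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }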
00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
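All of these namespaces are backed by SPDK null bdevs (null0 through null7 here, NULL1 in the first phase). A null bdev has no backing storage, so creating and resizing one is essentially a metadata-only operation, which keeps the hotplug churn cheap. The two bdev RPCs can be replayed by hand against the same target, with the arguments exactly as they appear in the log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py bdev_null_create null0 100 4096    # create the null bdev used by the first worker (arguments as logged)
    $rpc_py bdev_null_resize NULL1 1028        # grow the first-phase bdev to its final logged size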
00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
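The launch sequence that produces the interleaved worker lines is itself a short loop. Reconstructed from the sh@58-@66 markers (backgrounding with "&" is an assumption, suggested by the pids+=($!) lines and the final wait on eight PIDs; it reuses $rpc_py and the add_remove helper sketched above):

    nthreads=8                                           # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # sh@59-60: one null bdev per worker
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                 # sh@62-64: start the workers in the background
        add_remove $((i + 1)) "null$i" &                 # NSIDs 1..8 paired with null0..null7, as in the trace
        pids+=($!)
    done
    wait "${pids[@]}"                                    # sh@66: wait for all eight hotplug workers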
00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3495353 3495354 3495356 3495359 3495360 3495362 3495364 3495366 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.219 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:21.479 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.738 09:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.738 09:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:21.738 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:21.997 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:22.256 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:22.515 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:22.516 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:22.516 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.516 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:22.774 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:22.774 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:22.774 09:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.774 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:22.775 09:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:22.775 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:23.033 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:23.292 09:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.292 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.551 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:23.809 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:23.810 09:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:23.810 09:38:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:23.810 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.069 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:24.328 
09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.328 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:24.587 09:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:24.845 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:25.104 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:25.105 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.364 rmmod nvme_tcp 00:27:25.364 rmmod nvme_fabrics 00:27:25.364 rmmod nvme_keyring 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3489688 ']' 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3489688 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3489688 ']' 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3489688 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps 
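The trace above is the core of the hot-plug stress test: target/ns_hotplug_stress.sh repeatedly hot-adds namespaces 1-8 (backed by the null0-null7 bdevs) to nqn.2016-06.io.spdk:cnode1 and then hot-removes them, in a different order each round. A minimal sketch of that pattern, assuming the shuffled ordering and the ten-round bound suggested by the (( i < 10 )) guard; the real script's loop structure may differ in detail:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # hot-add namespaces 1-8 in a random order; null0-null7 are null bdevs
    # created earlier in the test
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # hot-remove the same namespaces, again in a random order
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

The shuffling means connected initiators keep seeing attach/detach events in unpredictable order while the test's I/O is in flight, which is the "stress" the test name refers to.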
--no-headers -o comm= 3489688 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3489688' 00:27:25.364 killing process with pid 3489688 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3489688 00:27:25.364 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3489688 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.622 09:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.155 00:27:28.155 real 0m47.060s 00:27:28.155 user 2m58.612s 00:27:28.155 sys 0m19.198s 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:28.155 ************************************ 00:27:28.155 END TEST nvmf_ns_hotplug_stress 00:27:28.155 ************************************ 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
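The tail of the hot-plug test above is the nvmftestfini teardown: the kernel initiator modules are unloaded (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), the nvmf_tgt reactor process (pid 3489688) is killed and reaped, the SPDK_NVMF iptables rules are dropped, and the test address is flushed from the cvl_0_1 port. A condensed sketch of that sequence, using the steps visible in the trace but with the retry loops and error handling simplified:

nvmfpid=3489688          # pid printed in the trace; normally read from the target's pid file
sync
modprobe -v -r nvme-tcp || true       # also pulls out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics || true
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"                   # stop the reactor_1 process found via ps --no-headers -o comm=
    wait "$nvmfpid" 2>/dev/null || true
fi
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the rules the test added
ip -4 addr flush cvl_0_1                               # clear the target-side test address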
common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.155 09:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:28.155 ************************************ 00:27:28.155 START TEST nvmf_delete_subsystem 00:27:28.155 ************************************ 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:28.155 * Looking for test storage... 00:27:28.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
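The END/START banners and the real/user/sys summary above come from the run_test wrapper in autotest_common.sh, which brackets each test script with a banner and times it; here it is starting test/nvmf/target/delete_subsystem.sh with --transport=tcp --interrupt-mode. A rough sketch of what the wrapper appears to do (presumably via the time builtin), with its xtrace plumbing and argument checks omitted:

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # the real/user/sys summary in the log comes from this timing
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# e.g. run_test_sketch nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp --interrupt-mode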
ver1_l : ver2_l) )) 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:28.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.155 --rc genhtml_branch_coverage=1 00:27:28.155 --rc genhtml_function_coverage=1 00:27:28.155 --rc genhtml_legend=1 00:27:28.155 --rc geninfo_all_blocks=1 00:27:28.155 --rc geninfo_unexecuted_blocks=1 00:27:28.155 00:27:28.155 ' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:28.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.155 --rc genhtml_branch_coverage=1 00:27:28.155 --rc genhtml_function_coverage=1 00:27:28.155 --rc genhtml_legend=1 00:27:28.155 --rc geninfo_all_blocks=1 00:27:28.155 --rc geninfo_unexecuted_blocks=1 00:27:28.155 00:27:28.155 ' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:28.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.155 --rc genhtml_branch_coverage=1 00:27:28.155 --rc genhtml_function_coverage=1 00:27:28.155 --rc genhtml_legend=1 00:27:28.155 --rc geninfo_all_blocks=1 00:27:28.155 --rc geninfo_unexecuted_blocks=1 00:27:28.155 00:27:28.155 ' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:28.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.155 --rc genhtml_branch_coverage=1 00:27:28.155 --rc genhtml_function_coverage=1 00:27:28.155 --rc 
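delete_subsystem.sh starts by probing the installed lcov version (the scripts/common.sh lt/cmp_versions calls traced above) to decide which coverage option spelling to use: here 1.15 < 2, so the legacy --rc lcov_branch_coverage/lcov_function_coverage names are selected. A minimal sketch of the comparison logic visible in the trace, with the operator dispatch reduced to a plain less-than helper:

# split on '.' and '-', then compare numerically field by field
version_lt() {
    local -a ver1 ver2
    local v n1 n2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        n1=${ver1[v]:-0} n2=${ver2[v]:-0}
        (( n1 > n2 )) && return 1
        (( n1 < n2 )) && return 0
    done
    return 1    # versions are equal, so not less-than
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "older lcov: use the --rc lcov_*=1 option names"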
genhtml_legend=1 00:27:28.155 --rc geninfo_all_blocks=1 00:27:28.155 --rc geninfo_unexecuted_blocks=1 00:27:28.155 00:27:28.155 ' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.155 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.156 09:38:40 
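The nvmf/common.sh block above establishes the test-wide NVMe-oF identity: listener ports 4420-4422, serial SPDKISFASTANDAWESOME, a freshly generated host NQN (nvme gen-hostnqn) whose UUID doubles as the host ID, and the NVME_HOST array that later connect calls splice in. A sketch of how those pieces are typically combined when the kernel initiator connects later in the test; the connect itself is outside this excerpt, and the 192.168.100.8 target address is an assumption derived from NVMF_IP_PREFIX and NVMF_IP_LEAST_ADDR:

NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # the UUID part, matching the traced value
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme connect -t tcp -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"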
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.156 09:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.421 09:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.421 09:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:33.421 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:33.421 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.421 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.422 09:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:33.422 Found net devices under 0000:af:00.0: cvl_0_0 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:33.422 Found net devices under 0000:af:00.1: cvl_0_1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:27:33.422 00:27:33.422 --- 10.0.0.2 ping statistics --- 00:27:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.422 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:27:33.422 00:27:33.422 --- 10.0.0.1 ping statistics --- 00:27:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.422 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3499526 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3499526 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3499526 ']' 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:33.422 [2024-12-13 09:38:45.453239] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:33.422 [2024-12-13 09:38:45.454132] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:27:33.422 [2024-12-13 09:38:45.454163] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.422 [2024-12-13 09:38:45.521445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.422 [2024-12-13 09:38:45.562706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.422 [2024-12-13 09:38:45.562737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.422 [2024-12-13 09:38:45.562745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.422 [2024-12-13 09:38:45.562751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.422 [2024-12-13 09:38:45.562756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.422 [2024-12-13 09:38:45.563732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.422 [2024-12-13 09:38:45.563735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.422 [2024-12-13 09:38:45.631324] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:33.422 [2024-12-13 09:38:45.631532] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:33.422 [2024-12-13 09:38:45.631596] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
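Note: the trace above shows the test bed being brought up: the cvl_0_0 port is moved into a dedicated network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420, and nvmf_tgt is then launched inside the namespace in interrupt mode on a 0x3 core mask. A minimal stand-alone sketch of the same bring-up, reusing the interface names, addresses and binary path that appear in this log, would look roughly like:

    # hedged sketch of the bring-up traced above; interface names, addresses
    # and the nvmf_tgt invocation are taken from this log, not from a spec
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                    # initiator-side reachability check
    modprobe nvme-tcp
    # start the target on cores 0-1, interrupt mode, inside the namespace
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &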
00:27:33.422 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 [2024-12-13 09:38:45.688341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 [2024-12-13 09:38:45.708779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 NULL1 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 Delay0 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3499662 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:33.423 09:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:33.681 [2024-12-13 09:38:45.791287] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
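Note: the subsystem under test is assembled entirely over RPC before the workload starts: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MiB null bdev wrapped in a delay bdev (Delay0), and the namespace attach, after which spdk_nvme_perf is pointed at the listener. A sketch of the same sequence as plain rpc.py calls, assuming the test's rpc_cmd helper maps to scripts/rpc.py on the default /var/tmp/spdk.sock, is:

    rpc=./scripts/rpc.py   # assumption: stand-in for the rpc_cmd helper seen in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator-side load that the later nvmf_delete_subsystem call will interrupt
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &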
00:27:35.584 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.584 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.584 09:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 [2024-12-13 09:38:47.960696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbdcc00d6c0 is same with the state(6) to be set 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 
starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 starting I/O failed: -6 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 [2024-12-13 09:38:47.961214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13960 is same with the state(6) to be set 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error 
(sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 [2024-12-13 09:38:47.961423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbdcc000c80 is same with the state(6) to be set 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Read completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.844 Write completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 
00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Write completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Write completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Write completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 Read completed with error (sct=0, sc=8) 00:27:35.845 [2024-12-13 09:38:47.961632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbdcc00d060 is same with the state(6) to be set 00:27:36.782 [2024-12-13 09:38:48.926201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf149b0 is same with the state(6) to be set 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read 
completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 [2024-12-13 09:38:48.962645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf132c0 is same with the state(6) to be set 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 [2024-12-13 09:38:48.962865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13b40 is same with the state(6) to be set 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error 
(sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 [2024-12-13 09:38:48.963020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13780 is same with the state(6) to be set 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Write completed with error (sct=0, sc=8) 00:27:36.782 Read completed with error (sct=0, sc=8) 00:27:36.782 [2024-12-13 09:38:48.964237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbdcc00d390 is same with the state(6) to be set 00:27:36.782 Initializing NVMe Controllers 00:27:36.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.782 Controller IO queue size 128, less than required. 00:27:36.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:36.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:36.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:36.782 Initialization complete. Launching workers. 
00:27:36.782 ======================================================== 00:27:36.782 Latency(us) 00:27:36.782 Device Information : IOPS MiB/s Average min max 00:27:36.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.57 0.10 943250.83 1313.47 1011303.28 00:27:36.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.85 0.08 867081.88 439.43 1011459.60 00:27:36.782 ======================================================== 00:27:36.782 Total : 353.42 0.17 909231.55 439.43 1011459.60 00:27:36.782 00:27:36.782 09:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.782 [2024-12-13 09:38:48.964675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf149b0 (9): Bad file descriptor 00:27:36.782 09:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:36.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:36.783 09:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3499662 00:27:36.783 09:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3499662 00:27:37.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3499662) - No such process 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3499662 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3499662 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3499662 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.351 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:37.352 [2024-12-13 09:38:49.484655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3500138 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:37.352 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:37.352 [2024-12-13 09:38:49.549903] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
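Note: the check being exercised here is the core of delete_subsystem.sh: delete the subsystem while spdk_nvme_perf is still driving I/O against it, then poll the perf process with kill -0 / sleep 0.5 until it goes away -- the target must tear the connections down (the perf run fails with aborted I/O) rather than hang. A condensed sketch of that loop, with a hypothetical $perf_pid standing in for the PIDs (3499662, 3500138) seen in the trace, is:

    # delete the subsystem out from under the running initiator workload
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # poll until spdk_nvme_perf exits; fail the test if it lingers too long
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf still running after delete" >&2; exit 1; }
        sleep 0.5
    done

    # the perf run is expected to have failed once its I/O was aborted,
    # so a zero exit status here would itself be a test failure
    if wait "$perf_pid"; then exit 1; fi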
00:27:37.919 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:37.919 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:37.919 09:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:38.177 09:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:38.177 09:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:38.177 09:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:38.744 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:38.744 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:38.744 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:39.311 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:39.311 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:39.311 09:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:39.878 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:39.878 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:39.878 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:40.446 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:40.446 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:40.446 09:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:40.446 Initializing NVMe Controllers 00:27:40.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:40.446 Controller IO queue size 128, less than required. 00:27:40.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:40.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:40.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:40.446 Initialization complete. Launching workers. 
00:27:40.446 ======================================================== 00:27:40.446 Latency(us) 00:27:40.446 Device Information : IOPS MiB/s Average min max 00:27:40.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003340.32 1000179.16 1011327.86 00:27:40.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004990.48 1000200.83 1010458.98 00:27:40.446 ======================================================== 00:27:40.446 Total : 256.00 0.12 1004165.40 1000179.16 1011327.86 00:27:40.446 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3500138 00:27:40.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3500138) - No such process 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3500138 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:40.704 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:40.704 rmmod nvme_tcp 00:27:40.704 rmmod nvme_fabrics 00:27:40.704 rmmod nvme_keyring 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3499526 ']' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3499526 ']' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3499526' 00:27:40.964 killing process with pid 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3499526 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.964 09:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:43.497 00:27:43.497 real 0m15.370s 00:27:43.497 user 0m25.857s 00:27:43.497 sys 0m5.386s 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:43.497 ************************************ 00:27:43.497 END TEST nvmf_delete_subsystem 00:27:43.497 ************************************ 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:43.497 ************************************ 00:27:43.497 START TEST nvmf_host_management 00:27:43.497 ************************************ 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:43.497 * Looking for test storage... 00:27:43.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:43.497 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:43.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.498 --rc genhtml_branch_coverage=1 00:27:43.498 --rc genhtml_function_coverage=1 00:27:43.498 --rc genhtml_legend=1 00:27:43.498 --rc geninfo_all_blocks=1 00:27:43.498 --rc geninfo_unexecuted_blocks=1 00:27:43.498 00:27:43.498 ' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:43.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.498 --rc genhtml_branch_coverage=1 00:27:43.498 --rc genhtml_function_coverage=1 00:27:43.498 --rc genhtml_legend=1 00:27:43.498 --rc geninfo_all_blocks=1 00:27:43.498 --rc geninfo_unexecuted_blocks=1 00:27:43.498 00:27:43.498 ' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:43.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.498 --rc genhtml_branch_coverage=1 00:27:43.498 --rc genhtml_function_coverage=1 00:27:43.498 --rc genhtml_legend=1 00:27:43.498 --rc geninfo_all_blocks=1 00:27:43.498 --rc geninfo_unexecuted_blocks=1 00:27:43.498 00:27:43.498 ' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:43.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.498 --rc genhtml_branch_coverage=1 00:27:43.498 --rc genhtml_function_coverage=1 00:27:43.498 --rc genhtml_legend=1 
00:27:43.498 --rc geninfo_all_blocks=1 00:27:43.498 --rc geninfo_unexecuted_blocks=1 00:27:43.498 00:27:43.498 ' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.498 09:38:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.498 09:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.766 09:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:48.766 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:48.766 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:48.766 Found net devices under 0000:af:00.0: cvl_0_0 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.766 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:48.767 Found net devices under 0000:af:00.1: cvl_0_1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:48.767 09:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:48.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:27:48.767 00:27:48.767 --- 10.0.0.2 ping statistics --- 00:27:48.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.767 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:48.767 00:27:48.767 --- 10.0.0.1 ping statistics --- 00:27:48.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.767 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3504311 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3504311 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3504311 ']' 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:48.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.767 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.025 [2024-12-13 09:39:01.132879] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:49.025 [2024-12-13 09:39:01.133797] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:27:49.025 [2024-12-13 09:39:01.133831] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.025 [2024-12-13 09:39:01.201612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.025 [2024-12-13 09:39:01.243016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.025 [2024-12-13 09:39:01.243051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.025 [2024-12-13 09:39:01.243058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.025 [2024-12-13 09:39:01.243063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.025 [2024-12-13 09:39:01.243068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.025 [2024-12-13 09:39:01.244381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.025 [2024-12-13 09:39:01.244473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.025 [2024-12-13 09:39:01.244602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.025 [2024-12-13 09:39:01.244603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.025 [2024-12-13 09:39:01.312119] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:49.025 [2024-12-13 09:39:01.312303] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:49.025 [2024-12-13 09:39:01.312706] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:49.025 [2024-12-13 09:39:01.312726] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:49.025 [2024-12-13 09:39:01.312876] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.025 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.025 [2024-12-13 09:39:01.377273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.284 Malloc0 00:27:49.284 [2024-12-13 09:39:01.449230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3504382 00:27:49.284 09:39:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3504382 /var/tmp/bdevperf.sock 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3504382 ']' 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:49.284 { 00:27:49.284 "params": { 00:27:49.284 "name": "Nvme$subsystem", 00:27:49.284 "trtype": "$TEST_TRANSPORT", 00:27:49.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:49.284 "adrfam": "ipv4", 00:27:49.284 "trsvcid": "$NVMF_PORT", 00:27:49.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:49.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:49.284 "hdgst": ${hdgst:-false}, 00:27:49.284 "ddgst": ${ddgst:-false} 00:27:49.284 }, 00:27:49.284 "method": "bdev_nvme_attach_controller" 00:27:49.284 } 00:27:49.284 EOF 00:27:49.284 )") 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:49.284 09:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:49.284 "params": { 00:27:49.284 "name": "Nvme0", 00:27:49.284 "trtype": "tcp", 00:27:49.284 "traddr": "10.0.0.2", 00:27:49.284 "adrfam": "ipv4", 00:27:49.284 "trsvcid": "4420", 00:27:49.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:49.284 "hdgst": false, 00:27:49.284 "ddgst": false 00:27:49.284 }, 00:27:49.284 "method": "bdev_nvme_attach_controller" 00:27:49.284 }' 00:27:49.284 [2024-12-13 09:39:01.542536] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:27:49.284 [2024-12-13 09:39:01.542583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504382 ] 00:27:49.284 [2024-12-13 09:39:01.606846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.284 [2024-12-13 09:39:01.647177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.852 Running I/O for 10 seconds... 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.852 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:49.853 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.853 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:27:49.853 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:27:49.853 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.113 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 [2024-12-13 09:39:02.365352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.113 [2024-12-13 09:39:02.365583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.113 [2024-12-13 09:39:02.365590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.365992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.365999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.114 [2024-12-13 09:39:02.366142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.114 [2024-12-13 09:39:02.366149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.115 [2024-12-13 09:39:02.366325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.366353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.115 [2024-12-13 09:39:02.367280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:50.115 task offset: 98816 on job bdev=Nvme0n1 fails 00:27:50.115 00:27:50.115 Latency(us) 00:27:50.115 [2024-12-13T08:39:02.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.115 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:50.115 Verification LBA range: start 0x0 length 0x400 00:27:50.115 Nvme0n1 : 0.40 1912.97 119.56 159.41 0.00 30061.43 1349.73 26838.55 00:27:50.115 [2024-12-13T08:39:02.481Z] =================================================================================================================== 00:27:50.115 [2024-12-13T08:39:02.481Z] Total : 1912.97 119.56 159.41 0.00 30061.43 1349.73 26838.55 00:27:50.115 [2024-12-13 09:39:02.369645] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:50.115 [2024-12-13 09:39:02.369665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb967e0 (9): Bad file descriptor 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.115 [2024-12-13 09:39:02.370662] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:50.115 [2024-12-13 09:39:02.370743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:50.115 [2024-12-13 09:39:02.370765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.115 [2024-12-13 09:39:02.370780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:27:50.115 [2024-12-13 09:39:02.370787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:50.115 [2024-12-13 09:39:02.370794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.115 [2024-12-13 09:39:02.370800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb967e0 00:27:50.115 [2024-12-13 09:39:02.370818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb967e0 (9): Bad file descriptor 00:27:50.115 [2024-12-13 09:39:02.370829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.115 [2024-12-13 09:39:02.370835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.115 [2024-12-13 09:39:02.370843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:50.115 [2024-12-13 09:39:02.370851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.115 09:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3504382 00:27:51.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3504382) - No such process 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:51.206 { 00:27:51.206 "params": { 00:27:51.206 "name": "Nvme$subsystem", 00:27:51.206 "trtype": "$TEST_TRANSPORT", 00:27:51.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.206 "adrfam": "ipv4", 00:27:51.206 "trsvcid": "$NVMF_PORT", 00:27:51.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.206 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:51.206 "hdgst": ${hdgst:-false}, 00:27:51.206 "ddgst": ${ddgst:-false} 00:27:51.206 }, 00:27:51.206 "method": "bdev_nvme_attach_controller" 00:27:51.206 } 00:27:51.206 EOF 00:27:51.206 )") 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:51.206 09:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:51.206 "params": { 00:27:51.206 "name": "Nvme0", 00:27:51.206 "trtype": "tcp", 00:27:51.206 "traddr": "10.0.0.2", 00:27:51.206 "adrfam": "ipv4", 00:27:51.206 "trsvcid": "4420", 00:27:51.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.206 "hdgst": false, 00:27:51.206 "ddgst": false 00:27:51.206 }, 00:27:51.206 "method": "bdev_nvme_attach_controller" 00:27:51.206 }' 00:27:51.206 [2024-12-13 09:39:03.436082] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:27:51.206 [2024-12-13 09:39:03.436130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504664 ] 00:27:51.206 [2024-12-13 09:39:03.500301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.206 [2024-12-13 09:39:03.540602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.491 Running I/O for 1 seconds... 
00:27:52.483 1984.00 IOPS, 124.00 MiB/s 00:27:52.483 Latency(us) 00:27:52.483 [2024-12-13T08:39:04.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.483 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:52.483 Verification LBA range: start 0x0 length 0x400 00:27:52.483 Nvme0n1 : 1.00 2042.57 127.66 0.00 0.00 30844.27 6397.56 26963.38 00:27:52.483 [2024-12-13T08:39:04.849Z] =================================================================================================================== 00:27:52.483 [2024-12-13T08:39:04.849Z] Total : 2042.57 127.66 0.00 0.00 30844.27 6397.56 26963.38 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:52.741 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.742 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:52.742 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.742 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:52.742 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.742 09:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.742 rmmod nvme_tcp 00:27:52.742 rmmod nvme_fabrics 00:27:52.742 rmmod nvme_keyring 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3504311 ']' 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3504311 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3504311 ']' 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3504311 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:52.742 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.742 09:39:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504311 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504311' 00:27:53.000 killing process with pid 3504311 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3504311 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3504311 00:27:53.000 [2024-12-13 09:39:05.266871] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.000 09:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.534 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.534 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:55.534 00:27:55.534 real 0m11.921s 00:27:55.534 user 0m18.401s 00:27:55.534 sys 0m5.773s 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:55.535 ************************************ 00:27:55.535 END TEST nvmf_host_management 00:27:55.535 ************************************ 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:55.535 ************************************ 00:27:55.535 START TEST nvmf_lvol 00:27:55.535 ************************************ 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:55.535 * Looking for test storage... 00:27:55.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.535 --rc genhtml_branch_coverage=1 00:27:55.535 --rc genhtml_function_coverage=1 00:27:55.535 --rc genhtml_legend=1 00:27:55.535 --rc geninfo_all_blocks=1 00:27:55.535 --rc geninfo_unexecuted_blocks=1 00:27:55.535 00:27:55.535 ' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:55.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.535 --rc genhtml_branch_coverage=1 00:27:55.535 --rc genhtml_function_coverage=1 00:27:55.535 --rc genhtml_legend=1 00:27:55.535 --rc geninfo_all_blocks=1 00:27:55.535 --rc geninfo_unexecuted_blocks=1 00:27:55.535 00:27:55.535 ' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.535 --rc genhtml_branch_coverage=1 00:27:55.535 --rc genhtml_function_coverage=1 00:27:55.535 --rc genhtml_legend=1 00:27:55.535 --rc geninfo_all_blocks=1 00:27:55.535 --rc geninfo_unexecuted_blocks=1 00:27:55.535 00:27:55.535 ' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.535 --rc genhtml_branch_coverage=1 00:27:55.535 --rc genhtml_function_coverage=1 00:27:55.535 --rc genhtml_legend=1 00:27:55.535 --rc geninfo_all_blocks=1 00:27:55.535 --rc geninfo_unexecuted_blocks=1 00:27:55.535 00:27:55.535 ' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.535 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.536 09:39:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.536 09:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.809 09:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.809 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.809 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.809 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.809 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.809 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.810 
09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:28:00.810 00:28:00.810 --- 10.0.0.2 ping statistics --- 00:28:00.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.810 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:28:00.810 00:28:00.810 --- 10.0.0.1 ping statistics --- 00:28:00.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.810 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3508745 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3508745 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3508745 ']' 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.810 09:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:00.810 [2024-12-13 09:39:13.040560] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
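[annotation] The trace up to this point wires up the point-to-point test network used by the NVMe/TCP tests: two Intel E810 ports (0000:af:00.0 and 0000:af:00.1, driver ice) are discovered as cvl_0_0 and cvl_0_1, one port is moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms connectivity. A condensed sketch of that plumbing, not a verbatim excerpt of nvmf/common.sh, using the interface names and addresses shown in the trace:

    # Sketch of the network setup traced above (names/addresses taken from the log).
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator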
00:28:00.810 [2024-12-13 09:39:13.041498] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:28:00.810 [2024-12-13 09:39:13.041531] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.810 [2024-12-13 09:39:13.108633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:00.810 [2024-12-13 09:39:13.150906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.810 [2024-12-13 09:39:13.150938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.810 [2024-12-13 09:39:13.150945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.810 [2024-12-13 09:39:13.150951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.810 [2024-12-13 09:39:13.150956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.810 [2024-12-13 09:39:13.152126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.810 [2024-12-13 09:39:13.152143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.810 [2024-12-13 09:39:13.152145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.070 [2024-12-13 09:39:13.220008] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:01.070 [2024-12-13 09:39:13.220324] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:01.070 [2024-12-13 09:39:13.220386] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:01.070 [2024-12-13 09:39:13.220424] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
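[annotation] With the network in place, the nvmf_lvol test launches the SPDK NVMe-oF target inside the target namespace in interrupt mode on a three-core mask; the notices above confirm DPDK EAL initialization, reactors on cores 0-2, and each spdk_thread switching to interrupt mode. A minimal sketch of the launch as it appears in the trace, where $SPDK_DIR stands for the repository checkout path shown in the log:

    # Start the target in the namespace, interrupt mode, cores 0-2 (mask 0x7).
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!   # the trace shows pid 3508745
    # The test then waits for the RPC socket (/var/tmp/spdk.sock) via the
    # waitforlisten helper from autotest_common.sh before issuing rpc.py calls.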
00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.070 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:01.329 [2024-12-13 09:39:13.456871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.329 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:01.587 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:01.587 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:01.587 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:01.588 09:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:01.846 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:02.104 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bf14cdca-46b4-4cf2-be2e-414f29b106e4 00:28:02.104 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf14cdca-46b4-4cf2-be2e-414f29b106e4 lvol 20 00:28:02.362 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=01015300-5325-47c4-b731-2ced630fde65 00:28:02.362 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:02.362 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01015300-5325-47c4-b731-2ced630fde65 00:28:02.621 09:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.880 [2024-12-13 09:39:15.024779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:02.880 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.880 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3509215 00:28:02.880 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:02.880 09:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:04.257 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 01015300-5325-47c4-b731-2ced630fde65 MY_SNAPSHOT 00:28:04.257 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b493b820-50f0-4b56-9ea6-70883af380f7 00:28:04.257 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 01015300-5325-47c4-b731-2ced630fde65 30 00:28:04.515 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b493b820-50f0-4b56-9ea6-70883af380f7 MY_CLONE 00:28:04.774 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5cb905ce-7417-4bf3-874f-f055c9df95dc 00:28:04.774 09:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5cb905ce-7417-4bf3-874f-f055c9df95dc 00:28:05.342 09:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3509215 00:28:13.463 Initializing NVMe Controllers 00:28:13.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:13.463 Controller IO queue size 128, less than required. 00:28:13.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:13.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:13.463 Initialization complete. Launching workers. 
00:28:13.463 ======================================================== 00:28:13.463 Latency(us) 00:28:13.463 Device Information : IOPS MiB/s Average min max 00:28:13.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12047.00 47.06 10625.38 1554.97 64582.32 00:28:13.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12217.50 47.72 10476.51 3553.53 56255.47 00:28:13.463 ======================================================== 00:28:13.463 Total : 24264.50 94.78 10550.43 1554.97 64582.32 00:28:13.463 00:28:13.463 09:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.721 09:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01015300-5325-47c4-b731-2ced630fde65 00:28:13.721 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf14cdca-46b4-4cf2-be2e-414f29b106e4 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.981 rmmod nvme_tcp 00:28:13.981 rmmod nvme_fabrics 00:28:13.981 rmmod nvme_keyring 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3508745 ']' 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3508745 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3508745 ']' 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3508745 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.981 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3508745 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3508745' 00:28:14.240 killing process with pid 3508745 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3508745 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3508745 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.240 09:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.776 00:28:16.776 real 0m21.181s 00:28:16.776 user 0m55.520s 00:28:16.776 sys 0m9.154s 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:16.776 ************************************ 00:28:16.776 END TEST nvmf_lvol 00:28:16.776 ************************************ 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.776 ************************************ 00:28:16.776 START TEST nvmf_lvs_grow 00:28:16.776 
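[annotation] That closes out nvmf_lvol at roughly 21 seconds of wall time: transport creation, a RAID-0 built from two malloc bdevs, a logical volume store and volume exported over NVMe/TCP, and snapshot/resize/clone/inflate operations performed while spdk_nvme_perf drives random writes against the namespace from the initiator side. Condensed into the rpc.py calls that appear in the trace; $rpc and the $lvs/$lvol/$snap/$clone variables are stand-ins for the full script path and the UUIDs the trace captures from each call:

    rpc="$SPDK_DIR"/scripts/rpc.py                                 # path shortened
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                 # Malloc0
    $rpc bdev_malloc_create 64 512                                 # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u $lvs lvol 20)                  # 20 MiB volume, prints its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # While spdk_nvme_perf runs 4 KiB randwrite at qd 128 against the namespace:
    snap=$($rpc bdev_lvol_snapshot $lvol MY_SNAPSHOT)
    $rpc bdev_lvol_resize $lvol 30                                 # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone $snap MY_CLONE)
    $rpc bdev_lvol_inflate $clone
    # Teardown, as traced above:
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete $lvol
    $rpc bdev_lvol_delete_lvstore -u $lvs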
************************************ 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:16.776 * Looking for test storage... 00:28:16.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.776 --rc genhtml_branch_coverage=1 00:28:16.776 --rc genhtml_function_coverage=1 00:28:16.776 --rc genhtml_legend=1 00:28:16.776 --rc geninfo_all_blocks=1 00:28:16.776 --rc geninfo_unexecuted_blocks=1 00:28:16.776 00:28:16.776 ' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.776 --rc genhtml_branch_coverage=1 00:28:16.776 --rc genhtml_function_coverage=1 00:28:16.776 --rc genhtml_legend=1 00:28:16.776 --rc geninfo_all_blocks=1 00:28:16.776 --rc geninfo_unexecuted_blocks=1 00:28:16.776 00:28:16.776 ' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.776 --rc genhtml_branch_coverage=1 00:28:16.776 --rc genhtml_function_coverage=1 00:28:16.776 --rc genhtml_legend=1 00:28:16.776 --rc geninfo_all_blocks=1 00:28:16.776 --rc geninfo_unexecuted_blocks=1 00:28:16.776 00:28:16.776 ' 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:16.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.776 --rc genhtml_branch_coverage=1 00:28:16.776 --rc genhtml_function_coverage=1 00:28:16.776 --rc genhtml_legend=1 00:28:16.776 --rc geninfo_all_blocks=1 00:28:16.776 --rc geninfo_unexecuted_blocks=1 00:28:16.776 00:28:16.776 ' 00:28:16.776 09:39:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.776 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
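[annotation] The nvmf_lvs_grow run begins the same way: it sources test/nvmf/common.sh, which besides the PATH bookkeeping above generates a host NQN with nvme gen-hostnqn and records the matching host ID, both later handed to nvme connect as --hostnqn/--hostid. A small standalone illustration of that relationship; the parameter expansion is one way to recover the host ID from the NQN and is not necessarily how common.sh itself derives it:

    # Host identity as seen in the trace (values differ per run).
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # UUID portion only (illustrative derivation)
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"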
00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.777 09:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.050 09:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.050 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.051 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.051 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.051 09:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.051 09:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:28:22.051 00:28:22.051 --- 10.0.0.2 ping statistics --- 00:28:22.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.051 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:28:22.051 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:22.051 00:28:22.051 --- 10.0.0.1 ping statistics --- 00:28:22.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.051 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3514248 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3514248 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3514248 ']' 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.052 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:22.052 [2024-12-13 09:39:34.304817] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
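For reference, the nvmf_tcp_init phase traced above moves the target-side port into its own network namespace so that initiator and target traffic really crosses TCP between two interfaces on the same host. A minimal sketch of the steps, using the interface names and addresses detected in this run (the full logic lives in nvmf/common.sh and handles more hardware variants):

  # clear any stale addresses, then put the target interface into a private namespace
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let the NVMe/TCP listener port through
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions

The target application itself is then started inside that namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1" invocation above), so its port-4420 listener binds to the namespaced 10.0.0.2 address.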
00:28:22.052 [2024-12-13 09:39:34.305760] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:28:22.052 [2024-12-13 09:39:34.305799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.052 [2024-12-13 09:39:34.373823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.052 [2024-12-13 09:39:34.414430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.052 [2024-12-13 09:39:34.414467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.052 [2024-12-13 09:39:34.414474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.052 [2024-12-13 09:39:34.414480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.052 [2024-12-13 09:39:34.414485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.052 [2024-12-13 09:39:34.414980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.310 [2024-12-13 09:39:34.483813] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:22.310 [2024-12-13 09:39:34.484028] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.310 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:22.569 [2024-12-13 09:39:34.715424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:22.569 ************************************ 00:28:22.569 START TEST lvs_grow_clean 00:28:22.569 ************************************ 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:22.569 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:22.848 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:22.848 09:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:22.848 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:22.848 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:22.848 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:23.107 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:23.107 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:23.107 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3e879ef-0d82-4e41-b0af-f068375ebb4d lvol 150 00:28:23.366 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9cc34ca-e3de-4736-9088-beaabe96146b 00:28:23.366 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:23.366 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:23.625 [2024-12-13 09:39:35.735370] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:23.625 [2024-12-13 09:39:35.735543] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:23.625 true 00:28:23.625 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:23.625 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:23.625 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:23.625 09:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:23.884 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9cc34ca-e3de-4736-9088-beaabe96146b 00:28:24.143 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.143 [2024-12-13 09:39:36.487792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3514727 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3514727 /var/tmp/bdevperf.sock 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3514727 ']' 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:24.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.403 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.403 [2024-12-13 09:39:36.712954] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:28:24.403 [2024-12-13 09:39:36.712998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514727 ] 00:28:24.663 [2024-12-13 09:39:36.776538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.663 [2024-12-13 09:39:36.815102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.663 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.663 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:24.663 09:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:24.922 Nvme0n1 00:28:24.922 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:25.181 [ 00:28:25.181 { 00:28:25.181 "name": "Nvme0n1", 00:28:25.181 "aliases": [ 00:28:25.181 "a9cc34ca-e3de-4736-9088-beaabe96146b" 00:28:25.181 ], 00:28:25.181 "product_name": "NVMe disk", 00:28:25.181 "block_size": 4096, 00:28:25.181 "num_blocks": 38912, 00:28:25.181 "uuid": "a9cc34ca-e3de-4736-9088-beaabe96146b", 00:28:25.181 "numa_id": 1, 00:28:25.181 "assigned_rate_limits": { 00:28:25.181 "rw_ios_per_sec": 0, 00:28:25.181 "rw_mbytes_per_sec": 0, 00:28:25.181 "r_mbytes_per_sec": 0, 00:28:25.181 "w_mbytes_per_sec": 0 00:28:25.181 }, 00:28:25.181 "claimed": false, 00:28:25.181 "zoned": false, 00:28:25.181 "supported_io_types": { 00:28:25.181 "read": true, 00:28:25.181 "write": true, 00:28:25.181 "unmap": true, 00:28:25.181 "flush": true, 00:28:25.181 "reset": true, 00:28:25.181 "nvme_admin": true, 00:28:25.181 "nvme_io": true, 00:28:25.181 "nvme_io_md": false, 00:28:25.181 "write_zeroes": true, 00:28:25.181 "zcopy": false, 00:28:25.181 "get_zone_info": false, 00:28:25.181 "zone_management": false, 00:28:25.181 "zone_append": false, 00:28:25.181 "compare": true, 00:28:25.181 "compare_and_write": true, 00:28:25.181 "abort": true, 00:28:25.181 "seek_hole": false, 00:28:25.181 "seek_data": false, 00:28:25.181 "copy": true, 
00:28:25.181 "nvme_iov_md": false 00:28:25.181 }, 00:28:25.181 "memory_domains": [ 00:28:25.181 { 00:28:25.181 "dma_device_id": "system", 00:28:25.181 "dma_device_type": 1 00:28:25.181 } 00:28:25.181 ], 00:28:25.181 "driver_specific": { 00:28:25.181 "nvme": [ 00:28:25.181 { 00:28:25.181 "trid": { 00:28:25.181 "trtype": "TCP", 00:28:25.181 "adrfam": "IPv4", 00:28:25.181 "traddr": "10.0.0.2", 00:28:25.181 "trsvcid": "4420", 00:28:25.181 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:25.181 }, 00:28:25.181 "ctrlr_data": { 00:28:25.181 "cntlid": 1, 00:28:25.181 "vendor_id": "0x8086", 00:28:25.181 "model_number": "SPDK bdev Controller", 00:28:25.181 "serial_number": "SPDK0", 00:28:25.181 "firmware_revision": "25.01", 00:28:25.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:25.181 "oacs": { 00:28:25.181 "security": 0, 00:28:25.181 "format": 0, 00:28:25.181 "firmware": 0, 00:28:25.181 "ns_manage": 0 00:28:25.181 }, 00:28:25.181 "multi_ctrlr": true, 00:28:25.181 "ana_reporting": false 00:28:25.181 }, 00:28:25.181 "vs": { 00:28:25.181 "nvme_version": "1.3" 00:28:25.181 }, 00:28:25.181 "ns_data": { 00:28:25.181 "id": 1, 00:28:25.181 "can_share": true 00:28:25.181 } 00:28:25.181 } 00:28:25.181 ], 00:28:25.181 "mp_policy": "active_passive" 00:28:25.181 } 00:28:25.181 } 00:28:25.181 ] 00:28:25.181 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3514884 00:28:25.181 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:25.181 09:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:25.181 Running I/O for 10 seconds... 
00:28:26.118 Latency(us) 00:28:26.118 [2024-12-13T08:39:38.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.118 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:28:26.118 [2024-12-13T08:39:38.484Z] =================================================================================================================== 00:28:26.118 [2024-12-13T08:39:38.484Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:28:26.118 00:28:27.055 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:27.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.314 Nvme0n1 : 2.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:28:27.314 [2024-12-13T08:39:39.680Z] =================================================================================================================== 00:28:27.314 [2024-12-13T08:39:39.680Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:28:27.314 00:28:27.314 true 00:28:27.314 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:27.314 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:27.572 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:27.572 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:27.572 09:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3514884 00:28:28.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.140 Nvme0n1 : 3.00 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:28:28.140 [2024-12-13T08:39:40.506Z] =================================================================================================================== 00:28:28.140 [2024-12-13T08:39:40.506Z] Total : 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:28:28.140 00:28:29.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:29.517 Nvme0n1 : 4.00 22971.25 89.73 0.00 0.00 0.00 0.00 0.00 00:28:29.517 [2024-12-13T08:39:41.883Z] =================================================================================================================== 00:28:29.517 [2024-12-13T08:39:41.883Z] Total : 22971.25 89.73 0.00 0.00 0.00 0.00 0.00 00:28:29.517 00:28:30.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.455 Nvme0n1 : 5.00 23050.60 90.04 0.00 0.00 0.00 0.00 0.00 00:28:30.455 [2024-12-13T08:39:42.821Z] =================================================================================================================== 00:28:30.455 [2024-12-13T08:39:42.821Z] Total : 23050.60 90.04 0.00 0.00 0.00 0.00 0.00 00:28:30.455 00:28:31.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:31.391 Nvme0n1 : 6.00 23124.67 90.33 0.00 0.00 0.00 0.00 0.00 00:28:31.391 [2024-12-13T08:39:43.757Z] 
=================================================================================================================== 00:28:31.391 [2024-12-13T08:39:43.757Z] Total : 23124.67 90.33 0.00 0.00 0.00 0.00 0.00 00:28:31.391 00:28:32.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.328 Nvme0n1 : 7.00 23159.43 90.47 0.00 0.00 0.00 0.00 0.00 00:28:32.328 [2024-12-13T08:39:44.694Z] =================================================================================================================== 00:28:32.328 [2024-12-13T08:39:44.694Z] Total : 23159.43 90.47 0.00 0.00 0.00 0.00 0.00 00:28:32.328 00:28:33.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.334 Nvme0n1 : 8.00 23201.38 90.63 0.00 0.00 0.00 0.00 0.00 00:28:33.334 [2024-12-13T08:39:45.700Z] =================================================================================================================== 00:28:33.334 [2024-12-13T08:39:45.700Z] Total : 23201.38 90.63 0.00 0.00 0.00 0.00 0.00 00:28:33.334 00:28:34.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:34.312 Nvme0n1 : 9.00 23219.89 90.70 0.00 0.00 0.00 0.00 0.00 00:28:34.312 [2024-12-13T08:39:46.678Z] =================================================================================================================== 00:28:34.312 [2024-12-13T08:39:46.678Z] Total : 23219.89 90.70 0.00 0.00 0.00 0.00 0.00 00:28:34.312 00:28:35.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.248 Nvme0n1 : 10.00 23234.70 90.76 0.00 0.00 0.00 0.00 0.00 00:28:35.248 [2024-12-13T08:39:47.614Z] =================================================================================================================== 00:28:35.248 [2024-12-13T08:39:47.614Z] Total : 23234.70 90.76 0.00 0.00 0.00 0.00 0.00 00:28:35.248 00:28:35.248 00:28:35.248 Latency(us) 00:28:35.248 [2024-12-13T08:39:47.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.248 Nvme0n1 : 10.00 23240.27 90.78 0.00 0.00 5504.62 3604.48 15915.89 00:28:35.249 [2024-12-13T08:39:47.615Z] =================================================================================================================== 00:28:35.249 [2024-12-13T08:39:47.615Z] Total : 23240.27 90.78 0.00 0.00 5504.62 3604.48 15915.89 00:28:35.249 { 00:28:35.249 "results": [ 00:28:35.249 { 00:28:35.249 "job": "Nvme0n1", 00:28:35.249 "core_mask": "0x2", 00:28:35.249 "workload": "randwrite", 00:28:35.249 "status": "finished", 00:28:35.249 "queue_depth": 128, 00:28:35.249 "io_size": 4096, 00:28:35.249 "runtime": 10.003112, 00:28:35.249 "iops": 23240.267628713944, 00:28:35.249 "mibps": 90.78229542466384, 00:28:35.249 "io_failed": 0, 00:28:35.249 "io_timeout": 0, 00:28:35.249 "avg_latency_us": 5504.621177060513, 00:28:35.249 "min_latency_us": 3604.48, 00:28:35.249 "max_latency_us": 15915.885714285714 00:28:35.249 } 00:28:35.249 ], 00:28:35.249 "core_count": 1 00:28:35.249 } 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3514727 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3514727 ']' 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3514727 00:28:35.249 
09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3514727 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3514727' 00:28:35.249 killing process with pid 3514727 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3514727 00:28:35.249 Received shutdown signal, test time was about 10.000000 seconds 00:28:35.249 00:28:35.249 Latency(us) 00:28:35.249 [2024-12-13T08:39:47.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.249 [2024-12-13T08:39:47.615Z] =================================================================================================================== 00:28:35.249 [2024-12-13T08:39:47.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.249 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3514727 00:28:35.507 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:35.766 09:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.026 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:36.026 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:36.026 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:36.026 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:36.026 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:36.284 [2024-12-13 09:39:48.515434] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:36.284 
09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:36.284 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:36.544 request: 00:28:36.544 { 00:28:36.544 "uuid": "b3e879ef-0d82-4e41-b0af-f068375ebb4d", 00:28:36.544 "method": "bdev_lvol_get_lvstores", 00:28:36.544 "req_id": 1 00:28:36.544 } 00:28:36.544 Got JSON-RPC error response 00:28:36.544 response: 00:28:36.544 { 00:28:36.544 "code": -19, 00:28:36.544 "message": "No such device" 00:28:36.544 } 00:28:36.544 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:36.544 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:36.544 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:36.544 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:36.544 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:36.803 aio_bdev 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a9cc34ca-e3de-4736-9088-beaabe96146b 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a9cc34ca-e3de-4736-9088-beaabe96146b 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:36.803 09:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:36.803 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9cc34ca-e3de-4736-9088-beaabe96146b -t 2000 00:28:37.062 [ 00:28:37.062 { 00:28:37.062 "name": "a9cc34ca-e3de-4736-9088-beaabe96146b", 00:28:37.062 "aliases": [ 00:28:37.062 "lvs/lvol" 00:28:37.062 ], 00:28:37.062 "product_name": "Logical Volume", 00:28:37.062 "block_size": 4096, 00:28:37.062 "num_blocks": 38912, 00:28:37.062 "uuid": "a9cc34ca-e3de-4736-9088-beaabe96146b", 00:28:37.062 "assigned_rate_limits": { 00:28:37.062 "rw_ios_per_sec": 0, 00:28:37.062 "rw_mbytes_per_sec": 0, 00:28:37.062 "r_mbytes_per_sec": 0, 00:28:37.062 "w_mbytes_per_sec": 0 00:28:37.062 }, 00:28:37.062 "claimed": false, 00:28:37.062 "zoned": false, 00:28:37.062 "supported_io_types": { 00:28:37.062 "read": true, 00:28:37.062 "write": true, 00:28:37.062 "unmap": true, 00:28:37.062 "flush": false, 00:28:37.062 "reset": true, 00:28:37.062 "nvme_admin": false, 00:28:37.062 "nvme_io": false, 00:28:37.062 "nvme_io_md": false, 00:28:37.062 "write_zeroes": true, 00:28:37.062 "zcopy": false, 00:28:37.062 "get_zone_info": false, 00:28:37.062 "zone_management": false, 00:28:37.062 "zone_append": false, 00:28:37.062 "compare": false, 00:28:37.062 "compare_and_write": false, 00:28:37.062 "abort": false, 00:28:37.062 "seek_hole": true, 00:28:37.062 "seek_data": true, 00:28:37.062 "copy": false, 00:28:37.062 "nvme_iov_md": false 00:28:37.062 }, 00:28:37.062 "driver_specific": { 00:28:37.062 "lvol": { 00:28:37.062 "lvol_store_uuid": "b3e879ef-0d82-4e41-b0af-f068375ebb4d", 00:28:37.062 "base_bdev": "aio_bdev", 00:28:37.062 "thin_provision": false, 00:28:37.062 "num_allocated_clusters": 38, 00:28:37.062 "snapshot": false, 00:28:37.062 "clone": false, 00:28:37.062 "esnap_clone": false 00:28:37.062 } 00:28:37.062 } 00:28:37.062 } 00:28:37.062 ] 00:28:37.062 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:37.062 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:37.062 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:37.320 09:39:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:37.321 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:37.321 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:37.579 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:37.579 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9cc34ca-e3de-4736-9088-beaabe96146b 00:28:37.579 09:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3e879ef-0d82-4e41-b0af-f068375ebb4d 00:28:37.838 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:38.097 00:28:38.097 real 0m15.567s 00:28:38.097 user 0m15.173s 00:28:38.097 sys 0m1.414s 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.097 ************************************ 00:28:38.097 END TEST lvs_grow_clean 00:28:38.097 ************************************ 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:38.097 ************************************ 00:28:38.097 START TEST lvs_grow_dirty 00:28:38.097 ************************************ 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:38.097 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:38.356 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:38.356 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:38.614 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:38.614 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:38.614 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:38.873 09:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:38.873 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:38.873 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 lvol 150 00:28:38.873 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:38.873 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:38.873 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:39.130 [2024-12-13 09:39:51.367306] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:39.130 [2024-12-13 09:39:51.367389] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:39.130 true 00:28:39.130 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:39.130 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:39.388 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:39.388 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:39.646 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:39.646 09:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:39.904 [2024-12-13 09:39:52.123587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.904 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3517254 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3517254 /var/tmp/bdevperf.sock 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3517254 ']' 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
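The target-side sequence for this variant is the same grow dance as in the clean test; condensed here with the sizes and UUIDs reported in this run (rpc.py and file paths shortened, and the grow itself is only issued a couple of seconds into the bdevperf run):

  truncate -s 200M .../test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create .../aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
                                               # -> 4d29481f-8ab6-409e-8f71-8bdf276ce8a4, 49 data clusters
  rpc.py bdev_lvol_create -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 lvol 150
                                               # -> 5ed06b73-1627-4df0-8f82-4fe5848bcb30
  truncate -s 400M .../aio_bdev                # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev              # bdev picks up the new size (51200 -> 102400 blocks)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ed06b73-1627-4df0-8f82-4fe5848bcb30
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_lvol_grow_lvstore -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4  # issued while I/O is in flight, 49 -> 99 clusters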
00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.163 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:40.163 [2024-12-13 09:39:52.366868] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:28:40.163 [2024-12-13 09:39:52.366914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517254 ] 00:28:40.163 [2024-12-13 09:39:52.429411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.163 [2024-12-13 09:39:52.470099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.421 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.421 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:40.421 09:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:40.680 Nvme0n1 00:28:40.680 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:40.938 [ 00:28:40.938 { 00:28:40.938 "name": "Nvme0n1", 00:28:40.938 "aliases": [ 00:28:40.938 "5ed06b73-1627-4df0-8f82-4fe5848bcb30" 00:28:40.938 ], 00:28:40.938 "product_name": "NVMe disk", 00:28:40.938 "block_size": 4096, 00:28:40.938 "num_blocks": 38912, 00:28:40.938 "uuid": "5ed06b73-1627-4df0-8f82-4fe5848bcb30", 00:28:40.938 "numa_id": 1, 00:28:40.938 "assigned_rate_limits": { 00:28:40.938 "rw_ios_per_sec": 0, 00:28:40.938 "rw_mbytes_per_sec": 0, 00:28:40.938 "r_mbytes_per_sec": 0, 00:28:40.938 "w_mbytes_per_sec": 0 00:28:40.938 }, 00:28:40.938 "claimed": false, 00:28:40.938 "zoned": false, 00:28:40.938 "supported_io_types": { 00:28:40.938 "read": true, 00:28:40.938 "write": true, 00:28:40.938 "unmap": true, 00:28:40.938 "flush": true, 00:28:40.938 "reset": true, 00:28:40.938 "nvme_admin": true, 00:28:40.938 "nvme_io": true, 00:28:40.938 "nvme_io_md": false, 00:28:40.938 "write_zeroes": true, 00:28:40.938 "zcopy": false, 00:28:40.938 "get_zone_info": false, 00:28:40.938 "zone_management": false, 00:28:40.938 "zone_append": false, 00:28:40.938 "compare": true, 00:28:40.938 "compare_and_write": true, 00:28:40.938 "abort": true, 00:28:40.938 "seek_hole": false, 00:28:40.938 "seek_data": false, 00:28:40.938 "copy": true, 00:28:40.938 "nvme_iov_md": false 00:28:40.938 }, 00:28:40.938 "memory_domains": [ 00:28:40.938 { 00:28:40.938 "dma_device_id": "system", 00:28:40.938 "dma_device_type": 1 00:28:40.938 } 00:28:40.938 ], 00:28:40.938 "driver_specific": { 00:28:40.938 "nvme": [ 00:28:40.938 { 00:28:40.938 "trid": { 00:28:40.938 "trtype": "TCP", 00:28:40.938 "adrfam": "IPv4", 00:28:40.938 "traddr": "10.0.0.2", 00:28:40.938 "trsvcid": "4420", 00:28:40.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:40.938 }, 00:28:40.938 "ctrlr_data": 
{ 00:28:40.938 "cntlid": 1, 00:28:40.938 "vendor_id": "0x8086", 00:28:40.938 "model_number": "SPDK bdev Controller", 00:28:40.938 "serial_number": "SPDK0", 00:28:40.938 "firmware_revision": "25.01", 00:28:40.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.938 "oacs": { 00:28:40.938 "security": 0, 00:28:40.938 "format": 0, 00:28:40.938 "firmware": 0, 00:28:40.938 "ns_manage": 0 00:28:40.938 }, 00:28:40.938 "multi_ctrlr": true, 00:28:40.938 "ana_reporting": false 00:28:40.938 }, 00:28:40.938 "vs": { 00:28:40.938 "nvme_version": "1.3" 00:28:40.938 }, 00:28:40.938 "ns_data": { 00:28:40.938 "id": 1, 00:28:40.938 "can_share": true 00:28:40.938 } 00:28:40.938 } 00:28:40.938 ], 00:28:40.938 "mp_policy": "active_passive" 00:28:40.938 } 00:28:40.938 } 00:28:40.938 ] 00:28:40.938 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3517467 00:28:40.938 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:40.938 09:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:40.938 Running I/O for 10 seconds... 00:28:42.313 Latency(us) 00:28:42.313 [2024-12-13T08:39:54.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:42.313 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:28:42.313 [2024-12-13T08:39:54.679Z] =================================================================================================================== 00:28:42.313 [2024-12-13T08:39:54.679Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:28:42.313 00:28:42.881 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:43.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.138 Nvme0n1 : 2.00 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:28:43.138 [2024-12-13T08:39:55.504Z] =================================================================================================================== 00:28:43.138 [2024-12-13T08:39:55.504Z] Total : 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:28:43.138 00:28:43.138 true 00:28:43.138 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:43.138 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:43.396 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:43.396 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:43.396 09:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3517467 00:28:43.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:43.961 Nvme0n1 : 
3.00 23045.67 90.02 0.00 0.00 0.00 0.00 0.00 00:28:43.961 [2024-12-13T08:39:56.327Z] =================================================================================================================== 00:28:43.961 [2024-12-13T08:39:56.327Z] Total : 23045.67 90.02 0.00 0.00 0.00 0.00 0.00 00:28:43.961 00:28:45.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.336 Nvme0n1 : 4.00 23126.25 90.34 0.00 0.00 0.00 0.00 0.00 00:28:45.336 [2024-12-13T08:39:57.702Z] =================================================================================================================== 00:28:45.336 [2024-12-13T08:39:57.702Z] Total : 23126.25 90.34 0.00 0.00 0.00 0.00 0.00 00:28:45.336 00:28:46.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.268 Nvme0n1 : 5.00 23200.00 90.62 0.00 0.00 0.00 0.00 0.00 00:28:46.268 [2024-12-13T08:39:58.634Z] =================================================================================================================== 00:28:46.268 [2024-12-13T08:39:58.634Z] Total : 23200.00 90.62 0.00 0.00 0.00 0.00 0.00 00:28:46.268 00:28:47.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:47.202 Nvme0n1 : 6.00 23228.00 90.73 0.00 0.00 0.00 0.00 0.00 00:28:47.202 [2024-12-13T08:39:59.568Z] =================================================================================================================== 00:28:47.202 [2024-12-13T08:39:59.568Z] Total : 23228.00 90.73 0.00 0.00 0.00 0.00 0.00 00:28:47.202 00:28:48.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.136 Nvme0n1 : 7.00 23229.86 90.74 0.00 0.00 0.00 0.00 0.00 00:28:48.136 [2024-12-13T08:40:00.502Z] =================================================================================================================== 00:28:48.136 [2024-12-13T08:40:00.502Z] Total : 23229.86 90.74 0.00 0.00 0.00 0.00 0.00 00:28:48.136 00:28:49.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.070 Nvme0n1 : 8.00 23215.38 90.69 0.00 0.00 0.00 0.00 0.00 00:28:49.070 [2024-12-13T08:40:01.436Z] =================================================================================================================== 00:28:49.070 [2024-12-13T08:40:01.436Z] Total : 23215.38 90.69 0.00 0.00 0.00 0.00 0.00 00:28:49.070 00:28:50.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.003 Nvme0n1 : 9.00 23246.44 90.81 0.00 0.00 0.00 0.00 0.00 00:28:50.003 [2024-12-13T08:40:02.369Z] =================================================================================================================== 00:28:50.003 [2024-12-13T08:40:02.369Z] Total : 23246.44 90.81 0.00 0.00 0.00 0.00 0.00 00:28:50.003 00:28:51.377 00:28:51.377 Latency(us) 00:28:51.377 [2024-12-13T08:40:03.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.377 Nvme0n1 : 10.00 23267.63 90.89 0.00 0.00 5498.29 3214.38 16602.45 00:28:51.377 [2024-12-13T08:40:03.743Z] =================================================================================================================== 00:28:51.377 [2024-12-13T08:40:03.743Z] Total : 23267.63 90.89 0.00 0.00 5498.29 3214.38 16602.45 00:28:51.377 { 00:28:51.377 "results": [ 00:28:51.377 { 00:28:51.377 "job": "Nvme0n1", 00:28:51.377 "core_mask": "0x2", 00:28:51.377 "workload": "randwrite", 00:28:51.377 "status": 
"finished", 00:28:51.377 "queue_depth": 128, 00:28:51.377 "io_size": 4096, 00:28:51.377 "runtime": 10.001622, 00:28:51.377 "iops": 23267.62599106425, 00:28:51.377 "mibps": 90.88916402759473, 00:28:51.377 "io_failed": 0, 00:28:51.377 "io_timeout": 0, 00:28:51.377 "avg_latency_us": 5498.292558116503, 00:28:51.377 "min_latency_us": 3214.384761904762, 00:28:51.377 "max_latency_us": 16602.453333333335 00:28:51.377 } 00:28:51.377 ], 00:28:51.377 "core_count": 1 00:28:51.377 } 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3517254 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3517254 ']' 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3517254 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3517254 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3517254' 00:28:51.377 killing process with pid 3517254 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3517254 00:28:51.377 Received shutdown signal, test time was about 10.000000 seconds 00:28:51.377 00:28:51.377 Latency(us) 00:28:51.377 [2024-12-13T08:40:03.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.377 [2024-12-13T08:40:03.743Z] =================================================================================================================== 00:28:51.377 [2024-12-13T08:40:03.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3517254 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.377 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:51.636 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:51.636 09:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 
00:28:51.894 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:51.894 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:51.894 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3514248 00:28:51.894 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3514248 00:28:51.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3514248 Killed "${NVMF_APP[@]}" "$@" 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3519225 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3519225 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3519225 ']' 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.895 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:51.895 [2024-12-13 09:40:04.225292] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:51.895 [2024-12-13 09:40:04.226220] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:28:51.895 [2024-12-13 09:40:04.226258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.153 [2024-12-13 09:40:04.294612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.153 [2024-12-13 09:40:04.334727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.153 [2024-12-13 09:40:04.334761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.153 [2024-12-13 09:40:04.334768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.153 [2024-12-13 09:40:04.334774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.153 [2024-12-13 09:40:04.334779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.153 [2024-12-13 09:40:04.335272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.153 [2024-12-13 09:40:04.403273] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:52.153 [2024-12-13 09:40:04.403497] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.153 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:52.412 [2024-12-13 09:40:04.634541] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:52.412 [2024-12-13 09:40:04.634662] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:52.412 [2024-12-13 09:40:04.634702] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:52.412 09:40:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:52.412 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:52.670 09:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ed06b73-1627-4df0-8f82-4fe5848bcb30 -t 2000 00:28:52.670 [ 00:28:52.670 { 00:28:52.670 "name": "5ed06b73-1627-4df0-8f82-4fe5848bcb30", 00:28:52.670 "aliases": [ 00:28:52.670 "lvs/lvol" 00:28:52.670 ], 00:28:52.670 "product_name": "Logical Volume", 00:28:52.670 "block_size": 4096, 00:28:52.670 "num_blocks": 38912, 00:28:52.670 "uuid": "5ed06b73-1627-4df0-8f82-4fe5848bcb30", 00:28:52.670 "assigned_rate_limits": { 00:28:52.670 "rw_ios_per_sec": 0, 00:28:52.670 "rw_mbytes_per_sec": 0, 00:28:52.670 "r_mbytes_per_sec": 0, 00:28:52.670 "w_mbytes_per_sec": 0 00:28:52.670 }, 00:28:52.670 "claimed": false, 00:28:52.670 "zoned": false, 00:28:52.670 "supported_io_types": { 00:28:52.670 "read": true, 00:28:52.670 "write": true, 00:28:52.670 "unmap": true, 00:28:52.670 "flush": false, 00:28:52.670 "reset": true, 00:28:52.670 "nvme_admin": false, 00:28:52.670 "nvme_io": false, 00:28:52.670 "nvme_io_md": false, 00:28:52.670 "write_zeroes": true, 00:28:52.670 "zcopy": false, 00:28:52.670 "get_zone_info": false, 00:28:52.670 "zone_management": false, 00:28:52.670 "zone_append": false, 00:28:52.670 "compare": false, 00:28:52.670 "compare_and_write": false, 00:28:52.670 "abort": false, 00:28:52.670 "seek_hole": true, 00:28:52.670 "seek_data": true, 00:28:52.670 "copy": false, 00:28:52.670 "nvme_iov_md": false 00:28:52.670 }, 00:28:52.670 "driver_specific": { 00:28:52.670 "lvol": { 00:28:52.670 "lvol_store_uuid": "4d29481f-8ab6-409e-8f71-8bdf276ce8a4", 00:28:52.670 "base_bdev": "aio_bdev", 00:28:52.670 "thin_provision": false, 00:28:52.670 "num_allocated_clusters": 38, 00:28:52.670 "snapshot": false, 00:28:52.670 "clone": false, 00:28:52.670 "esnap_clone": false 00:28:52.670 } 00:28:52.670 } 00:28:52.670 } 00:28:52.670 ] 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:52.930 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:53.189 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:53.189 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:53.447 [2024-12-13 09:40:05.611738] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:53.447 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:53.706 request: 00:28:53.706 { 00:28:53.706 "uuid": "4d29481f-8ab6-409e-8f71-8bdf276ce8a4", 00:28:53.706 "method": "bdev_lvol_get_lvstores", 00:28:53.706 "req_id": 1 00:28:53.706 } 00:28:53.706 Got JSON-RPC error response 00:28:53.706 response: 00:28:53.706 { 00:28:53.706 "code": -19, 00:28:53.706 "message": "No such device" 
00:28:53.706 } 00:28:53.706 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:53.706 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:53.706 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:53.706 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:53.706 09:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:53.706 aio_bdev 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:53.706 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:53.964 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ed06b73-1627-4df0-8f82-4fe5848bcb30 -t 2000 00:28:54.223 [ 00:28:54.223 { 00:28:54.223 "name": "5ed06b73-1627-4df0-8f82-4fe5848bcb30", 00:28:54.223 "aliases": [ 00:28:54.223 "lvs/lvol" 00:28:54.223 ], 00:28:54.223 "product_name": "Logical Volume", 00:28:54.223 "block_size": 4096, 00:28:54.223 "num_blocks": 38912, 00:28:54.223 "uuid": "5ed06b73-1627-4df0-8f82-4fe5848bcb30", 00:28:54.223 "assigned_rate_limits": { 00:28:54.223 "rw_ios_per_sec": 0, 00:28:54.223 "rw_mbytes_per_sec": 0, 00:28:54.223 "r_mbytes_per_sec": 0, 00:28:54.223 "w_mbytes_per_sec": 0 00:28:54.223 }, 00:28:54.223 "claimed": false, 00:28:54.223 "zoned": false, 00:28:54.223 "supported_io_types": { 00:28:54.223 "read": true, 00:28:54.223 "write": true, 00:28:54.223 "unmap": true, 00:28:54.223 "flush": false, 00:28:54.223 "reset": true, 00:28:54.223 "nvme_admin": false, 00:28:54.223 "nvme_io": false, 00:28:54.223 "nvme_io_md": false, 00:28:54.223 "write_zeroes": true, 00:28:54.223 "zcopy": false, 00:28:54.223 "get_zone_info": false, 00:28:54.223 "zone_management": false, 00:28:54.223 "zone_append": false, 00:28:54.223 "compare": false, 00:28:54.223 "compare_and_write": false, 00:28:54.223 "abort": false, 00:28:54.223 "seek_hole": true, 00:28:54.223 "seek_data": true, 00:28:54.223 "copy": false, 
00:28:54.223 "nvme_iov_md": false 00:28:54.223 }, 00:28:54.223 "driver_specific": { 00:28:54.223 "lvol": { 00:28:54.223 "lvol_store_uuid": "4d29481f-8ab6-409e-8f71-8bdf276ce8a4", 00:28:54.223 "base_bdev": "aio_bdev", 00:28:54.223 "thin_provision": false, 00:28:54.223 "num_allocated_clusters": 38, 00:28:54.223 "snapshot": false, 00:28:54.223 "clone": false, 00:28:54.223 "esnap_clone": false 00:28:54.223 } 00:28:54.223 } 00:28:54.223 } 00:28:54.223 ] 00:28:54.223 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:54.223 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:54.223 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:54.481 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:54.481 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:54.481 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:54.481 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:54.481 09:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ed06b73-1627-4df0-8f82-4fe5848bcb30 00:28:54.739 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d29481f-8ab6-409e-8f71-8bdf276ce8a4 00:28:54.997 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:55.255 00:28:55.255 real 0m17.054s 00:28:55.255 user 0m34.500s 00:28:55.255 sys 0m3.774s 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:55.255 ************************************ 00:28:55.255 END TEST lvs_grow_dirty 00:28:55.255 ************************************ 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:55.255 
09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:55.255 nvmf_trace.0 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.255 rmmod nvme_tcp 00:28:55.255 rmmod nvme_fabrics 00:28:55.255 rmmod nvme_keyring 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3519225 ']' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3519225 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3519225 ']' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3519225 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.255 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3519225 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3519225' 00:28:55.514 killing process with pid 3519225 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3519225 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3519225 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.514 09:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.045 00:28:58.045 real 0m41.194s 00:28:58.045 user 0m51.986s 00:28:58.045 sys 0m9.638s 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:58.045 ************************************ 00:28:58.045 END TEST nvmf_lvs_grow 00:28:58.045 ************************************ 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:58.045 ************************************ 00:28:58.045 START TEST nvmf_bdev_io_wait 00:28:58.045 ************************************ 00:28:58.045 09:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:28:58.045 * Looking for test storage... 00:28:58.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:58.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.045 --rc genhtml_branch_coverage=1 00:28:58.045 --rc genhtml_function_coverage=1 00:28:58.045 --rc genhtml_legend=1 00:28:58.045 --rc geninfo_all_blocks=1 00:28:58.045 --rc geninfo_unexecuted_blocks=1 00:28:58.045 00:28:58.045 ' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:58.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.045 --rc genhtml_branch_coverage=1 00:28:58.045 --rc genhtml_function_coverage=1 00:28:58.045 --rc genhtml_legend=1 00:28:58.045 --rc geninfo_all_blocks=1 00:28:58.045 --rc geninfo_unexecuted_blocks=1 00:28:58.045 00:28:58.045 ' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:58.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.045 --rc genhtml_branch_coverage=1 00:28:58.045 --rc genhtml_function_coverage=1 00:28:58.045 --rc genhtml_legend=1 00:28:58.045 --rc geninfo_all_blocks=1 00:28:58.045 --rc geninfo_unexecuted_blocks=1 00:28:58.045 00:28:58.045 ' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:58.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.045 --rc genhtml_branch_coverage=1 00:28:58.045 --rc genhtml_function_coverage=1 00:28:58.045 --rc genhtml_legend=1 00:28:58.045 --rc geninfo_all_blocks=1 00:28:58.045 --rc 
geninfo_unexecuted_blocks=1 00:28:58.045 00:28:58.045 ' 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.045 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.046 09:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:03.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:03.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:03.311 Found net devices under 0000:af:00.0: cvl_0_0 00:29:03.311 
09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.311 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:03.312 Found net devices under 0000:af:00.1: cvl_0_1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:29:03.312 00:29:03.312 --- 10.0.0.2 ping statistics --- 00:29:03.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.312 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:03.312 00:29:03.312 --- 10.0.0.1 ping statistics --- 00:29:03.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.312 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.312 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3523226 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3523226 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3523226 ']' 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
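The nvmf_tcp_init trace above reduces to the short setup below: the first E810 port (cvl_0_0, 0000:af:00.0) is moved into a private network namespace and addressed as the target side, the second port (cvl_0_1, 0000:af:00.1) stays in the root namespace as the initiator side, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms connectivity between the two sides. This is only a condensed sketch of the commands already shown; the interface names and the 10.0.0.0/24 addresses are specific to this host.

# Condensed from the nvmf/common.sh nvmf_tcp_init steps traced above.
TARGET_IF=cvl_0_0          # NIC port handed to the SPDK target
INITIATOR_IF=cvl_0_1       # NIC port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk         # namespace that isolates the target-side port

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) on the initiator-side interface, tagged so cleanup can find the rule.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1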
00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.571 [2024-12-13 09:40:15.741938] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:03.571 [2024-12-13 09:40:15.742889] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:03.571 [2024-12-13 09:40:15.742929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.571 [2024-12-13 09:40:15.809739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.571 [2024-12-13 09:40:15.853477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.571 [2024-12-13 09:40:15.853510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.571 [2024-12-13 09:40:15.853517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.571 [2024-12-13 09:40:15.853526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.571 [2024-12-13 09:40:15.853531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.571 [2024-12-13 09:40:15.854788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.571 [2024-12-13 09:40:15.854884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.571 [2024-12-13 09:40:15.854993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.571 [2024-12-13 09:40:15.854994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.571 [2024-12-13 09:40:15.855293] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.571 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.830 [2024-12-13 09:40:15.983854] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:03.831 [2024-12-13 09:40:15.983942] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:03.831 [2024-12-13 09:40:15.984607] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:03.831 [2024-12-13 09:40:15.985015] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
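What the notices above amount to: nvmf_tgt was started inside the namespace with --wait-for-rpc, so initialization pauses until RPCs arrive; bdev_set_options is issued while it is paused (with a deliberately small bdev_io pool and per-thread cache, presumably so this test can exercise the bdev I/O wait path), and framework_start_init then completes start-up, at which point each nvmf_tgt poll-group thread is switched to interrupt mode. A minimal re-creation outside the test harness might look like the sketch below; it uses scripts/rpc.py where the trace uses the rpc_cmd helper, both of which talk to the /var/tmp/spdk.sock socket mentioned above.

# Launch the target inside the test namespace, paused until RPC-driven init
# (flags copied from the trace; paths are relative to the SPDK repo root).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &

# Wait for the RPC socket to appear (the harness does this with waitforlisten).
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

# While init is paused: -p/-c set the bdev_io pool and per-thread cache sizes.
./scripts/rpc.py bdev_set_options -p 5 -c 1

# Finish initialization; the poll-group threads flip to interrupt mode at this point.
./scripts/rpc.py framework_start_init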
00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.831 [2024-12-13 09:40:15.991679] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.831 09:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.831 Malloc0 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:03.831 [2024-12-13 09:40:16.043675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3523261 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3523263 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:03.831 { 00:29:03.831 "params": { 00:29:03.831 "name": "Nvme$subsystem", 00:29:03.831 "trtype": "$TEST_TRANSPORT", 00:29:03.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.831 "adrfam": "ipv4", 00:29:03.831 "trsvcid": "$NVMF_PORT", 00:29:03.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.831 "hdgst": ${hdgst:-false}, 00:29:03.831 "ddgst": ${ddgst:-false} 00:29:03.831 }, 00:29:03.831 "method": "bdev_nvme_attach_controller" 00:29:03.831 } 00:29:03.831 EOF 00:29:03.831 )") 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3523265 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3523268 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:03.831 { 00:29:03.831 "params": { 00:29:03.831 "name": "Nvme$subsystem", 00:29:03.831 "trtype": "$TEST_TRANSPORT", 00:29:03.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.831 "adrfam": "ipv4", 00:29:03.831 "trsvcid": "$NVMF_PORT", 00:29:03.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.831 "hdgst": ${hdgst:-false}, 00:29:03.831 "ddgst": ${ddgst:-false} 00:29:03.831 }, 00:29:03.831 "method": "bdev_nvme_attach_controller" 
00:29:03.831 } 00:29:03.831 EOF 00:29:03.831 )") 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:03.831 { 00:29:03.831 "params": { 00:29:03.831 "name": "Nvme$subsystem", 00:29:03.831 "trtype": "$TEST_TRANSPORT", 00:29:03.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.831 "adrfam": "ipv4", 00:29:03.831 "trsvcid": "$NVMF_PORT", 00:29:03.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.831 "hdgst": ${hdgst:-false}, 00:29:03.831 "ddgst": ${ddgst:-false} 00:29:03.831 }, 00:29:03.831 "method": "bdev_nvme_attach_controller" 00:29:03.831 } 00:29:03.831 EOF 00:29:03.831 )") 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:03.831 { 00:29:03.831 "params": { 00:29:03.831 "name": "Nvme$subsystem", 00:29:03.831 "trtype": "$TEST_TRANSPORT", 00:29:03.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.831 "adrfam": "ipv4", 00:29:03.831 "trsvcid": "$NVMF_PORT", 00:29:03.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.831 "hdgst": ${hdgst:-false}, 00:29:03.831 "ddgst": ${ddgst:-false} 00:29:03.831 }, 00:29:03.831 "method": "bdev_nvme_attach_controller" 00:29:03.831 } 00:29:03.831 EOF 00:29:03.831 )") 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3523261 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:03.831 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:03.831 "params": { 00:29:03.831 "name": "Nvme1", 00:29:03.831 "trtype": "tcp", 00:29:03.831 "traddr": "10.0.0.2", 00:29:03.831 "adrfam": "ipv4", 00:29:03.831 "trsvcid": "4420", 00:29:03.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.831 "hdgst": false, 00:29:03.831 "ddgst": false 00:29:03.831 }, 00:29:03.832 "method": "bdev_nvme_attach_controller" 00:29:03.832 }' 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:03.832 "params": { 00:29:03.832 "name": "Nvme1", 00:29:03.832 "trtype": "tcp", 00:29:03.832 "traddr": "10.0.0.2", 00:29:03.832 "adrfam": "ipv4", 00:29:03.832 "trsvcid": "4420", 00:29:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.832 "hdgst": false, 00:29:03.832 "ddgst": false 00:29:03.832 }, 00:29:03.832 "method": "bdev_nvme_attach_controller" 00:29:03.832 }' 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:03.832 "params": { 00:29:03.832 "name": "Nvme1", 00:29:03.832 "trtype": "tcp", 00:29:03.832 "traddr": "10.0.0.2", 00:29:03.832 "adrfam": "ipv4", 00:29:03.832 "trsvcid": "4420", 00:29:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.832 "hdgst": false, 00:29:03.832 "ddgst": false 00:29:03.832 }, 00:29:03.832 "method": "bdev_nvme_attach_controller" 00:29:03.832 }' 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:03.832 09:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:03.832 "params": { 00:29:03.832 "name": "Nvme1", 00:29:03.832 "trtype": "tcp", 00:29:03.832 "traddr": "10.0.0.2", 00:29:03.832 "adrfam": "ipv4", 00:29:03.832 "trsvcid": "4420", 00:29:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.832 "hdgst": false, 00:29:03.832 "ddgst": false 00:29:03.832 }, 00:29:03.832 "method": "bdev_nvme_attach_controller" 00:29:03.832 }' 00:29:03.832 [2024-12-13 09:40:16.094046] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:03.832 [2024-12-13 09:40:16.094098] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:03.832 [2024-12-13 09:40:16.096600] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:29:03.832 [2024-12-13 09:40:16.096650] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:03.832 [2024-12-13 09:40:16.098898] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:03.832 [2024-12-13 09:40:16.098940] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:03.832 [2024-12-13 09:40:16.100459] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:03.832 [2024-12-13 09:40:16.100500] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:04.090 [2024-12-13 09:40:16.277600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.090 [2024-12-13 09:40:16.323401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.090 [2024-12-13 09:40:16.379750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.090 [2024-12-13 09:40:16.419641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.090 [2024-12-13 09:40:16.434209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:04.349 [2024-12-13 09:40:16.462842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:04.349 [2024-12-13 09:40:16.478960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.349 [2024-12-13 09:40:16.520209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:04.349 Running I/O for 1 seconds... 00:29:04.349 Running I/O for 1 seconds... 00:29:04.607 Running I/O for 1 seconds... 00:29:04.607 Running I/O for 1 seconds... 
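By this point the target side exports a 64 MiB, 512-byte-block Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.2:4420, and four bdevperf instances have been launched against it in parallel, one per workload (write, read, flush, unmap), each pinned to its own core (masks 0x10/0x20/0x40/0x80, matching the "Reactor started on core 4..7" notices) and each fed its attach configuration on /dev/fd/63 from gen_nvmf_target_json. The sketch below replays one of those instances stand-alone; the inner attach entry is exactly what the trace printed, while the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config wrapper and is an assumption here, since the trace only shows the inner entry.

# Hypothetical stand-alone replay of the write-workload bdevperf instance above.
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

# 128 outstanding 4 KiB I/Os for 1 second on core 4; the read/flush/unmap instances
# in the trace differ only in -m (0x20/0x40/0x80), -i (2/3/4) and -w.
./build/examples/bdevperf --json /tmp/nvme1.json -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256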
00:29:05.541 12223.00 IOPS, 47.75 MiB/s 00:29:05.541 Latency(us) 00:29:05.541 [2024-12-13T08:40:17.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.541 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:05.541 Nvme1n1 : 1.01 12282.18 47.98 0.00 0.00 10388.15 3526.46 12670.29 00:29:05.541 [2024-12-13T08:40:17.908Z] =================================================================================================================== 00:29:05.542 [2024-12-13T08:40:17.908Z] Total : 12282.18 47.98 0.00 0.00 10388.15 3526.46 12670.29 00:29:05.542 11259.00 IOPS, 43.98 MiB/s 00:29:05.542 Latency(us) 00:29:05.542 [2024-12-13T08:40:17.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.542 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:05.542 Nvme1n1 : 1.01 11334.07 44.27 0.00 0.00 11260.87 4244.24 14355.50 00:29:05.542 [2024-12-13T08:40:17.908Z] =================================================================================================================== 00:29:05.542 [2024-12-13T08:40:17.908Z] Total : 11334.07 44.27 0.00 0.00 11260.87 4244.24 14355.50 00:29:05.542 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3523263 00:29:05.542 10834.00 IOPS, 42.32 MiB/s 00:29:05.542 Latency(us) 00:29:05.542 [2024-12-13T08:40:17.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.542 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:05.542 Nvme1n1 : 1.01 10911.34 42.62 0.00 0.00 11700.28 3947.76 17351.44 00:29:05.542 [2024-12-13T08:40:17.908Z] =================================================================================================================== 00:29:05.542 [2024-12-13T08:40:17.908Z] Total : 10911.34 42.62 0.00 0.00 11700.28 3947.76 17351.44 00:29:05.542 243392.00 IOPS, 950.75 MiB/s 00:29:05.542 Latency(us) 00:29:05.542 [2024-12-13T08:40:17.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.542 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:05.542 Nvme1n1 : 1.00 243028.17 949.33 0.00 0.00 523.72 222.35 1482.36 00:29:05.542 [2024-12-13T08:40:17.908Z] =================================================================================================================== 00:29:05.542 [2024-12-13T08:40:17.908Z] Total : 243028.17 949.33 0.00 0.00 523.72 222.35 1482.36 00:29:05.542 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3523265 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3523268 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.800 09:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.800 rmmod nvme_tcp 00:29:05.800 rmmod nvme_fabrics 00:29:05.800 rmmod nvme_keyring 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3523226 ']' 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3523226 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3523226 ']' 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3523226 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3523226 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3523226' 00:29:05.800 killing process with pid 3523226 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3523226 00:29:05.800 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3523226 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.076 09:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.979 00:29:07.979 real 0m10.352s 00:29:07.979 user 0m15.009s 00:29:07.979 sys 0m6.045s 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:07.979 ************************************ 00:29:07.979 END TEST nvmf_bdev_io_wait 00:29:07.979 ************************************ 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.979 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:08.239 ************************************ 00:29:08.239 START TEST nvmf_queue_depth 00:29:08.239 ************************************ 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:08.239 * Looking for test storage... 
00:29:08.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.239 --rc genhtml_branch_coverage=1 00:29:08.239 --rc genhtml_function_coverage=1 00:29:08.239 --rc genhtml_legend=1 00:29:08.239 --rc geninfo_all_blocks=1 00:29:08.239 --rc geninfo_unexecuted_blocks=1 00:29:08.239 00:29:08.239 ' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.239 --rc genhtml_branch_coverage=1 00:29:08.239 --rc genhtml_function_coverage=1 00:29:08.239 --rc genhtml_legend=1 00:29:08.239 --rc geninfo_all_blocks=1 00:29:08.239 --rc geninfo_unexecuted_blocks=1 00:29:08.239 00:29:08.239 ' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.239 --rc genhtml_branch_coverage=1 00:29:08.239 --rc genhtml_function_coverage=1 00:29:08.239 --rc genhtml_legend=1 00:29:08.239 --rc geninfo_all_blocks=1 00:29:08.239 --rc geninfo_unexecuted_blocks=1 00:29:08.239 00:29:08.239 ' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.239 --rc genhtml_branch_coverage=1 00:29:08.239 --rc genhtml_function_coverage=1 00:29:08.239 --rc genhtml_legend=1 00:29:08.239 --rc geninfo_all_blocks=1 00:29:08.239 --rc 
geninfo_unexecuted_blocks=1 00:29:08.239 00:29:08.239 ' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.239 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.240 09:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.515 09:40:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:13.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:13.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.515 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:29:13.516 Found net devices under 0000:af:00.0: cvl_0_0 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:13.516 Found net devices under 0000:af:00.1: cvl_0_1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:29:13.516 00:29:13.516 --- 10.0.0.2 ping statistics --- 00:29:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.516 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:29:13.516 00:29:13.516 --- 10.0.0.1 ping statistics --- 00:29:13.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.516 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.516 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3526963 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3526963 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3526963 ']' 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.776 09:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:13.776 [2024-12-13 09:40:25.932790] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:13.776 [2024-12-13 09:40:25.933662] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:13.776 [2024-12-13 09:40:25.933694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.776 [2024-12-13 09:40:26.001613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.776 [2024-12-13 09:40:26.038514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.776 [2024-12-13 09:40:26.038549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.776 [2024-12-13 09:40:26.038556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.776 [2024-12-13 09:40:26.038561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.776 [2024-12-13 09:40:26.038566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.776 [2024-12-13 09:40:26.039064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.776 [2024-12-13 09:40:26.106563] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:13.776 [2024-12-13 09:40:26.106766] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:13.776 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.776 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:13.776 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.776 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.776 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 [2024-12-13 09:40:26.167666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 Malloc0 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 [2024-12-13 09:40:26.243641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3526989 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3526989 /var/tmp/bdevperf.sock 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3526989 ']' 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.036 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.036 [2024-12-13 09:40:26.295400] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
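The bdevperf initiator starting here connects to a target that was configured a few lines up through rpc_cmd. Gathered in one place, and expressed as direct scripts/rpc.py invocations (the harness issues the same calls via its rpc_cmd helper), the target-side sequence for this run is:

rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the harness's extra options
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then drives that listener with -q 1024 -o 4096 -w verify -t 10, i.e. queue depth 1024, 4 KiB I/O, verify workload, for 10 seconds.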
00:29:14.036 [2024-12-13 09:40:26.295442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526989 ] 00:29:14.036 [2024-12-13 09:40:26.357546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.036 [2024-12-13 09:40:26.397501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:14.295 NVMe0n1 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.295 09:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.295 Running I/O for 10 seconds... 00:29:16.606 11303.00 IOPS, 44.15 MiB/s [2024-12-13T08:40:29.907Z] 11806.00 IOPS, 46.12 MiB/s [2024-12-13T08:40:30.842Z] 11951.67 IOPS, 46.69 MiB/s [2024-12-13T08:40:31.776Z] 12028.25 IOPS, 46.99 MiB/s [2024-12-13T08:40:32.711Z] 12084.60 IOPS, 47.21 MiB/s [2024-12-13T08:40:34.088Z] 12150.67 IOPS, 47.46 MiB/s [2024-12-13T08:40:35.024Z] 12188.43 IOPS, 47.61 MiB/s [2024-12-13T08:40:35.960Z] 12221.00 IOPS, 47.74 MiB/s [2024-12-13T08:40:36.896Z] 12271.89 IOPS, 47.94 MiB/s [2024-12-13T08:40:36.896Z] 12285.10 IOPS, 47.99 MiB/s 00:29:24.530 Latency(us) 00:29:24.530 [2024-12-13T08:40:36.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.530 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:24.530 Verification LBA range: start 0x0 length 0x4000 00:29:24.530 NVMe0n1 : 10.10 12266.38 47.92 0.00 0.00 82886.57 18974.23 59668.97 00:29:24.530 [2024-12-13T08:40:36.896Z] =================================================================================================================== 00:29:24.530 [2024-12-13T08:40:36.896Z] Total : 12266.38 47.92 0.00 0.00 82886.57 18974.23 59668.97 00:29:24.530 { 00:29:24.530 "results": [ 00:29:24.530 { 00:29:24.530 "job": "NVMe0n1", 00:29:24.530 "core_mask": "0x1", 00:29:24.530 "workload": "verify", 00:29:24.530 "status": "finished", 00:29:24.530 "verify_range": { 00:29:24.530 "start": 0, 00:29:24.530 "length": 16384 00:29:24.530 }, 00:29:24.530 "queue_depth": 1024, 00:29:24.530 "io_size": 4096, 00:29:24.530 "runtime": 10.095398, 00:29:24.530 "iops": 12266.381176849094, 00:29:24.530 "mibps": 47.915551472066774, 00:29:24.530 "io_failed": 0, 00:29:24.530 "io_timeout": 0, 00:29:24.530 "avg_latency_us": 82886.57330235484, 00:29:24.530 "min_latency_us": 18974.23238095238, 00:29:24.530 "max_latency_us": 59668.96761904762 00:29:24.530 } 
00:29:24.530 ], 00:29:24.530 "core_count": 1 00:29:24.530 } 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3526989 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3526989 ']' 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3526989 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3526989 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3526989' 00:29:24.531 killing process with pid 3526989 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3526989 00:29:24.531 Received shutdown signal, test time was about 10.000000 seconds 00:29:24.531 00:29:24.531 Latency(us) 00:29:24.531 [2024-12-13T08:40:36.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.531 [2024-12-13T08:40:36.897Z] =================================================================================================================== 00:29:24.531 [2024-12-13T08:40:36.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.531 09:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3526989 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.790 rmmod nvme_tcp 00:29:24.790 rmmod nvme_fabrics 00:29:24.790 rmmod nvme_keyring 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
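A quick consistency check on the bdevperf summary above: at the 4096-byte I/O size,

12266.38 IOPS x 4096 B = 50,243,092 B/s / 1,048,576 ≈ 47.92 MiB/s

which matches the reported MiB/s column, and the runtime field (10.095 s) lines up with the requested 10-second run at queue depth 1024 plus a small setup/teardown overhead.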
00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3526963 ']' 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3526963 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3526963 ']' 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3526963 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3526963 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3526963' 00:29:24.790 killing process with pid 3526963 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3526963 00:29:24.790 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3526963 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.050 09:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.150 00:29:27.150 real 0m19.043s 00:29:27.150 user 0m22.453s 00:29:27.150 sys 0m5.833s 00:29:27.150 09:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:27.150 ************************************ 00:29:27.150 END TEST nvmf_queue_depth 00:29:27.150 ************************************ 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:27.150 ************************************ 00:29:27.150 START TEST nvmf_target_multipath 00:29:27.150 ************************************ 00:29:27.150 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:27.409 * Looking for test storage... 00:29:27.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:27.409 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:27.409 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:29:27.409 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:27.409 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:27.409 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.410 --rc genhtml_branch_coverage=1 00:29:27.410 --rc genhtml_function_coverage=1 00:29:27.410 --rc genhtml_legend=1 00:29:27.410 --rc geninfo_all_blocks=1 00:29:27.410 --rc geninfo_unexecuted_blocks=1 00:29:27.410 00:29:27.410 ' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.410 --rc genhtml_branch_coverage=1 00:29:27.410 --rc genhtml_function_coverage=1 00:29:27.410 --rc genhtml_legend=1 00:29:27.410 --rc geninfo_all_blocks=1 00:29:27.410 --rc geninfo_unexecuted_blocks=1 00:29:27.410 00:29:27.410 ' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.410 --rc genhtml_branch_coverage=1 00:29:27.410 --rc genhtml_function_coverage=1 00:29:27.410 --rc genhtml_legend=1 
00:29:27.410 --rc geninfo_all_blocks=1 00:29:27.410 --rc geninfo_unexecuted_blocks=1 00:29:27.410 00:29:27.410 ' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.410 --rc genhtml_branch_coverage=1 00:29:27.410 --rc genhtml_function_coverage=1 00:29:27.410 --rc genhtml_legend=1 00:29:27.410 --rc geninfo_all_blocks=1 00:29:27.410 --rc geninfo_unexecuted_blocks=1 00:29:27.410 00:29:27.410 ' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.410 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.411 09:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.981 09:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:33.981 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:33.981 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.981 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.982 09:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:33.982 Found net devices under 0000:af:00.0: cvl_0_0 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:33.982 Found net devices under 0000:af:00.1: cvl_0_1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:33.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:29:33.982 00:29:33.982 --- 10.0.0.2 ping statistics --- 00:29:33.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.982 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:33.982 00:29:33.982 --- 10.0.0.1 ping statistics --- 00:29:33.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.982 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:33.982 only one NIC for nvmf test 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.982 rmmod nvme_tcp 00:29:33.982 rmmod nvme_fabrics 00:29:33.982 rmmod nvme_keyring 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.982 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:33.983 09:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.983 09:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:35.362 09:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.362 00:29:35.362 real 0m8.024s 00:29:35.362 user 0m1.741s 00:29:35.362 sys 0m4.293s 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:35.362 ************************************ 00:29:35.362 END TEST nvmf_target_multipath 00:29:35.362 ************************************ 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:35.362 ************************************ 00:29:35.362 START TEST nvmf_zcopy 00:29:35.362 ************************************ 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:35.362 * Looking for test storage... 
00:29:35.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:35.362 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.621 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:35.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.622 --rc genhtml_branch_coverage=1 00:29:35.622 --rc genhtml_function_coverage=1 00:29:35.622 --rc genhtml_legend=1 00:29:35.622 --rc geninfo_all_blocks=1 00:29:35.622 --rc geninfo_unexecuted_blocks=1 00:29:35.622 00:29:35.622 ' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:35.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.622 --rc genhtml_branch_coverage=1 00:29:35.622 --rc genhtml_function_coverage=1 00:29:35.622 --rc genhtml_legend=1 00:29:35.622 --rc geninfo_all_blocks=1 00:29:35.622 --rc geninfo_unexecuted_blocks=1 00:29:35.622 00:29:35.622 ' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:35.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.622 --rc genhtml_branch_coverage=1 00:29:35.622 --rc genhtml_function_coverage=1 00:29:35.622 --rc genhtml_legend=1 00:29:35.622 --rc geninfo_all_blocks=1 00:29:35.622 --rc geninfo_unexecuted_blocks=1 00:29:35.622 00:29:35.622 ' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:35.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.622 --rc genhtml_branch_coverage=1 00:29:35.622 --rc genhtml_function_coverage=1 00:29:35.622 --rc genhtml_legend=1 00:29:35.622 --rc geninfo_all_blocks=1 00:29:35.622 --rc geninfo_unexecuted_blocks=1 00:29:35.622 00:29:35.622 ' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.622 09:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.622 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.623 09:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.896 09:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:40.896 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:40.896 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:40.896 Found net devices under 0000:af:00.0: cvl_0_0 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:40.896 Found net devices under 0000:af:00.1: cvl_0_1 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.896 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.897 09:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.897 09:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:29:40.897 00:29:40.897 --- 10.0.0.2 ping statistics --- 00:29:40.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.897 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:40.897 00:29:40.897 --- 10.0.0.1 ping statistics --- 00:29:40.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.897 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3535470 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3535470 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3535470 ']' 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.897 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:40.897 [2024-12-13 09:40:53.227065] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.897 [2024-12-13 09:40:53.228013] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:29:40.897 [2024-12-13 09:40:53.228049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.156 [2024-12-13 09:40:53.293125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.156 [2024-12-13 09:40:53.332962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.156 [2024-12-13 09:40:53.332991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.156 [2024-12-13 09:40:53.332998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.156 [2024-12-13 09:40:53.333004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.156 [2024-12-13 09:40:53.333009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.157 [2024-12-13 09:40:53.333481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.157 [2024-12-13 09:40:53.401304] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:41.157 [2024-12-13 09:40:53.401528] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.157 [2024-12-13 09:40:53.470129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.157 [2024-12-13 09:40:53.494256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:41.157 09:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.157 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.415 malloc0 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:41.415 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:41.415 { 00:29:41.415 "params": { 00:29:41.415 "name": "Nvme$subsystem", 00:29:41.415 "trtype": "$TEST_TRANSPORT", 00:29:41.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.415 "adrfam": "ipv4", 00:29:41.415 "trsvcid": "$NVMF_PORT", 00:29:41.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.416 "hdgst": ${hdgst:-false}, 00:29:41.416 "ddgst": ${ddgst:-false} 00:29:41.416 }, 00:29:41.416 "method": "bdev_nvme_attach_controller" 00:29:41.416 } 00:29:41.416 EOF 00:29:41.416 )") 00:29:41.416 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:41.416 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:41.416 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:41.416 09:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:41.416 "params": { 00:29:41.416 "name": "Nvme1", 00:29:41.416 "trtype": "tcp", 00:29:41.416 "traddr": "10.0.0.2", 00:29:41.416 "adrfam": "ipv4", 00:29:41.416 "trsvcid": "4420", 00:29:41.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.416 "hdgst": false, 00:29:41.416 "ddgst": false 00:29:41.416 }, 00:29:41.416 "method": "bdev_nvme_attach_controller" 00:29:41.416 }' 00:29:41.416 [2024-12-13 09:40:53.587148] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:29:41.416 [2024-12-13 09:40:53.587195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535584 ] 00:29:41.416 [2024-12-13 09:40:53.652794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.416 [2024-12-13 09:40:53.694320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.674 Running I/O for 10 seconds... 00:29:43.988 8542.00 IOPS, 66.73 MiB/s [2024-12-13T08:40:57.295Z] 8583.00 IOPS, 67.05 MiB/s [2024-12-13T08:40:58.231Z] 8615.00 IOPS, 67.30 MiB/s [2024-12-13T08:40:59.168Z] 8630.50 IOPS, 67.43 MiB/s [2024-12-13T08:41:00.104Z] 8639.20 IOPS, 67.49 MiB/s [2024-12-13T08:41:01.040Z] 8644.33 IOPS, 67.53 MiB/s [2024-12-13T08:41:02.417Z] 8623.00 IOPS, 67.37 MiB/s [2024-12-13T08:41:03.354Z] 8635.62 IOPS, 67.47 MiB/s [2024-12-13T08:41:04.292Z] 8641.00 IOPS, 67.51 MiB/s [2024-12-13T08:41:04.292Z] 8650.50 IOPS, 67.58 MiB/s 00:29:51.926 Latency(us) 00:29:51.926 [2024-12-13T08:41:04.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.926 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:51.926 Verification LBA range: start 0x0 length 0x1000 00:29:51.926 Nvme1n1 : 10.01 8653.04 67.60 0.00 0.00 14750.25 2309.36 21096.35 00:29:51.927 [2024-12-13T08:41:04.293Z] =================================================================================================================== 00:29:51.927 [2024-12-13T08:41:04.293Z] Total : 8653.04 67.60 0.00 0.00 14750.25 2309.36 21096.35 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3537265 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.927 { 00:29:51.927 "params": { 00:29:51.927 "name": "Nvme$subsystem", 00:29:51.927 "trtype": "$TEST_TRANSPORT", 00:29:51.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.927 "adrfam": "ipv4", 00:29:51.927 "trsvcid": "$NVMF_PORT", 00:29:51.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.927 "hdgst": ${hdgst:-false}, 00:29:51.927 "ddgst": ${ddgst:-false} 00:29:51.927 }, 00:29:51.927 "method": "bdev_nvme_attach_controller" 00:29:51.927 } 00:29:51.927 EOF 00:29:51.927 )") 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:51.927 
[2024-12-13 09:41:04.173816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.173854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:51.927 09:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:51.927 "params": { 00:29:51.927 "name": "Nvme1", 00:29:51.927 "trtype": "tcp", 00:29:51.927 "traddr": "10.0.0.2", 00:29:51.927 "adrfam": "ipv4", 00:29:51.927 "trsvcid": "4420", 00:29:51.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.927 "hdgst": false, 00:29:51.927 "ddgst": false 00:29:51.927 }, 00:29:51.927 "method": "bdev_nvme_attach_controller" 00:29:51.927 }' 00:29:51.927 [2024-12-13 09:41:04.185780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.185794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.197775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.197784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.209777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.209786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.214468] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:29:51.927 [2024-12-13 09:41:04.214507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537265 ] 00:29:51.927 [2024-12-13 09:41:04.221775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.221785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.233775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.233784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.245777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.245786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.257774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.257783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.269773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.269782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.927 [2024-12-13 09:41:04.277796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.927 [2024-12-13 09:41:04.281774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.927 [2024-12-13 09:41:04.281783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.293778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.293791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.305773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.305782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.317774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.317785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.319818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.187 [2024-12-13 09:41:04.329781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.329793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.341784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.341801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.353778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.353791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.365775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:29:52.187 [2024-12-13 09:41:04.365786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.377786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.377803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.389776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.389786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.402115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.402131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.413782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.413798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.425778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.425791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.437777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.437789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.449774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.449783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.461774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.461784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.473776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.473789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.485777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.485791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.497777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.497792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.509784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.509800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 Running I/O for 5 seconds... 
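From this point the target-side log is dominated by pairs of spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused: "Unable to add namespace", repeating roughly every 12 ms while the initiator runs its 5-second I/O pass. The pattern is consistent with a loop that keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already attached, each attempt being rejected as expected. A minimal sketch of such a loop follows, assuming SPDK's rpc.py is on PATH, the target uses the default RPC socket, the backing bdev is named Malloc0, and -n is the nsid option; none of these details are visible in the log, and the real test script may drive this differently.

#!/usr/bin/env bash
# Sketch only (not the actual test script): repeatedly try to add NSID 1 to the
# subsystem while I/O is in flight. Every attempt is expected to fail with
# "Requested NSID 1 already in use", matching the error pairs above.
# Assumptions: rpc.py on PATH, default RPC socket, bdev named Malloc0, -n == --nsid.
NQN=nqn.2016-06.io.spdk:cnode1

for _ in $(seq 1 400); do
  rpc.py nvmf_subsystem_add_ns -n 1 "$NQN" Malloc0 || true   # failure is the expected outcome
  sleep 0.01
done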
00:29:52.187 [2024-12-13 09:41:04.523498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.523517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.538074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.538092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.187 [2024-12-13 09:41:04.551781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.187 [2024-12-13 09:41:04.551800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.566716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.566734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.581700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.581720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.593046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.593064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.607765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.607783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.622082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.622100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.637550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.637569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.651946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.651964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.666685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.666705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.681628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.681650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.695909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.695929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.710701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.710719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.726289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 
[2024-12-13 09:41:04.726307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.737535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.737553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.751585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.751603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.766315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.766333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.781708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.781727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.796092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.796110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.447 [2024-12-13 09:41:04.810992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.447 [2024-12-13 09:41:04.811011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.825665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.825682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.838949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.838967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.853695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.853714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.864663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.864681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.879468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.879487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.894312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.894330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.909631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.706 [2024-12-13 09:41:04.909648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.706 [2024-12-13 09:41:04.923427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.923444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:04.937930] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.937948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:04.950640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.950657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:04.965678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.965698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:04.979869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.979887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:04.994008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:04.994026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:05.006710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:05.006729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:05.021862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:05.021885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:05.033278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:05.033297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:05.047132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:05.047151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.707 [2024-12-13 09:41:05.061790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.707 [2024-12-13 09:41:05.061808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.075496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.075514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.090306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.090323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.106492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.106513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.121963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.121981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.135626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.135644] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.150435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.150459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.166076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.166093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.181465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.181484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.196216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.196234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.210365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.210382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.225567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.225585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.239737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.239755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.254550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.254567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.270072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.270089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.283305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.283322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.298165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.298186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.313621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.313639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.965 [2024-12-13 09:41:05.327012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.965 [2024-12-13 09:41:05.327030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.338343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.338361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.351496] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.351514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.365827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.365844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.377911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.377937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.391517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.391535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.406405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.406423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.421812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.421830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.435836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.435854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.450259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.450276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.465520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.465538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.479358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.479375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.493862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.493880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.506354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.506372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 16781.00 IOPS, 131.10 MiB/s [2024-12-13T08:41:05.590Z] [2024-12-13 09:41:05.519431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.519456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.533729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.533747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.546615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:53.224 [2024-12-13 09:41:05.546632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.561768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.561785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.573144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.573161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.224 [2024-12-13 09:41:05.587746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.224 [2024-12-13 09:41:05.587764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.602781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.602798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.617369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.617387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.631651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.631667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.646297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.646318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.661233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.661250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.675311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.675328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.690268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.483 [2024-12-13 09:41:05.690285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.483 [2024-12-13 09:41:05.703616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.703634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.718366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.718383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.733546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.733563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.747765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.747783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.762405] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.762422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.774339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.774356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.790142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.790160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.806105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.806122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.819388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.819406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.834174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.834191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.484 [2024-12-13 09:41:05.849871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.484 [2024-12-13 09:41:05.849889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.863413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.863431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.878022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.878040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.889072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.889089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.903491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.903509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.918142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.918159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.933389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.933407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.947391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.947409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.962209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.962226] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.977355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.977373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:05.991224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:05.991241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.006118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.006135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.021586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.021604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.035097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.035114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.049694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.049712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.062552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.062570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.075839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.075859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.090602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.090621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.743 [2024-12-13 09:41:06.105958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.743 [2024-12-13 09:41:06.105977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.119522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.119540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.133773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.133791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.147676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.147695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.162117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.162135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.178032] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.178051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.188751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.188769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.203715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.203734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.218462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.218481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.234154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.234171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.249346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.249364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.263684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.263702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.278797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.278815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.293574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.293593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.307561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.307580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.321942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.321961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.332178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.332196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.346722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.346740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.003 [2024-12-13 09:41:06.361972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.003 [2024-12-13 09:41:06.361991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.375825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.375843] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.390906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.390925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.406233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.406251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.422142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.422160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.437693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.437715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.451498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.262 [2024-12-13 09:41:06.451517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.262 [2024-12-13 09:41:06.466361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.466379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.482102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.482121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.497635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.497653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.512013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.512030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 16830.50 IOPS, 131.49 MiB/s [2024-12-13T08:41:06.629Z] [2024-12-13 09:41:06.526663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.526681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.542231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.542249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.557885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.557903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.569794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.569812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.583602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.583620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 
09:41:06.598530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.598549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.613927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.613946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.263 [2024-12-13 09:41:06.626696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.263 [2024-12-13 09:41:06.626715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.641869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.641887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.655520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.655538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.670258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.670274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.685735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.685754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.699234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.699251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.714139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.714155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.729523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.729545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.742588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.742607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.757599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.757617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.768778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.768796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.783932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.783949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.798819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.798837] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.813171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.813189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.827692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.827710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.842478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.842495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.857843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.857860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.871254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.871272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.522 [2024-12-13 09:41:06.886596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.522 [2024-12-13 09:41:06.886613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.901359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.901376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.915769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.915786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.930242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.930259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.942457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.942474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.957269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.957286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.971527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.971544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.986329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.986345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:06.997381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:06.997403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.011606] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.011624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.026395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.026413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.042108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.042124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.057573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.057591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.069677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.069694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.083767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.083785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.098895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.098914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.114351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.114368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.129777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.129795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.782 [2024-12-13 09:41:07.142049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.782 [2024-12-13 09:41:07.142066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.155605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.155624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.170028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.170046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.180717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.180735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.195718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.195735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.210918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.210937] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.225953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.225971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.041 [2024-12-13 09:41:07.239253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.041 [2024-12-13 09:41:07.239270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.254098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.254116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.265282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.265306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.279658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.279676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.294234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.294252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.309162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.309180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.323778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.323796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.338265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.338282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.350468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.350485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.363599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.363616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.378506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.378523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.393333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.393352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.042 [2024-12-13 09:41:07.407932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.042 [2024-12-13 09:41:07.407950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.422374] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.422391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.437719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.437737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.451376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.451395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.465500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.465518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.478349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.478368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.491292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.491310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.501504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.501521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.515795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.515814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 16824.00 IOPS, 131.44 MiB/s [2024-12-13T08:41:07.667Z] [2024-12-13 09:41:07.530602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.530621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.545836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.545855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.558218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.558235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.571905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.571923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.586798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.586816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.602374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.602393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.617529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
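The per-second bdevperf samples in this stretch (16781.00 IOPS at 131.10 MiB/s, 16830.50 IOPS at 131.49 MiB/s, 16824.00 IOPS at 131.44 MiB/s) pin down the I/O size: throughput divided by IOPS comes out to almost exactly 8 KiB per I/O, assuming MiB here means 2^20 bytes.

# Back-of-envelope check: KiB per I/O = (MiB/s * 1024) / IOPS
awk 'BEGIN {
  printf "%.2f KiB/io\n", (131.10 * 1024) / 16781.00;   # ~8.00
  printf "%.2f KiB/io\n", (131.44 * 1024) / 16824.00;   # ~8.00
}'

So the run is sustaining roughly 16.8k I/Os per second at an 8 KiB block size while the namespace-add attempts keep failing in the background.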
00:29:55.301 [2024-12-13 09:41:07.617547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.631520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.631538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.646534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.646552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.301 [2024-12-13 09:41:07.661848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.301 [2024-12-13 09:41:07.661867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.675776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.675794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.690591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.690610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.705619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.705638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.718016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.718034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.731602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.731621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.746477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.746495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.761806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.761825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.772941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.772959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.787619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.787637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.802738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.802756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.817184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.817202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.829947] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.829965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.843781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.843799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.858160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.858179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.873788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.873807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.885789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.885807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.899424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.899442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.914000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.914018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.561 [2024-12-13 09:41:07.927580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.561 [2024-12-13 09:41:07.927598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:07.942283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:07.942300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:07.957676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:07.957694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:07.971358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:07.971376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:07.985822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:07.985850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:07.999537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:07.999555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.013919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.013937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.026550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.026567] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.039913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.039931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.054465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.054482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.069731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.069749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.083224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.083242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.097589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.097607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.110493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.110511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.125714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.125733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.139232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.139249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.153700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.153718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.166305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.166322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.821 [2024-12-13 09:41:08.179312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.821 [2024-12-13 09:41:08.179330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.194186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.194202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.209779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.209797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.223569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.223587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.238165] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.238182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.253252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.253269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.267415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.267432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.080 [2024-12-13 09:41:08.282098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.080 [2024-12-13 09:41:08.282115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.297535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.297553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.311672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.311689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.326099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.326121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.337370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.337387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.351724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.351742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.366240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.366258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.381331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.381349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.395969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.395985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.410648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.410665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.425026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.425044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.081 [2024-12-13 09:41:08.438917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.081 [2024-12-13 09:41:08.438935] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.453601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.453619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.466370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.466387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.482123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.482140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.497445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.497468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.511445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.511468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 16858.75 IOPS, 131.71 MiB/s [2024-12-13T08:41:08.706Z] [2024-12-13 09:41:08.525817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.525834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.539224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.539241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.553960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.553977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.565881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.565898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.579673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.579690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.594948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.594970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.609747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.609765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.622327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.622345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.635229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.635247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 
09:41:08.649857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.649874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.661150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.661168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.675588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.675606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.690120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.690138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.340 [2024-12-13 09:41:08.705651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.340 [2024-12-13 09:41:08.705670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.718641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.718658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.733227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.733244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.747533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.747551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.761764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.761782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.772355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.772373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.786985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.787003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.801711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.801730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.815095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.815112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.829759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.829776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.841208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.841226] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.855954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.855978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.870519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.870537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.885821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.885840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.899361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.899378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.914459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.914477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.930109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.930128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.945532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.945550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.600 [2024-12-13 09:41:08.959901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.600 [2024-12-13 09:41:08.959920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.859 [2024-12-13 09:41:08.975209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.859 [2024-12-13 09:41:08.975230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.859 [2024-12-13 09:41:08.989739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.859 [2024-12-13 09:41:08.989757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.002737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.002755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.017288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.017307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.030895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.030914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.046427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.046445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.061948] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.061965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.073101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.073119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.086973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.086992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.102564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.102582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.114608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.114626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.129851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.129869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.143622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.143641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.158420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.158439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.169040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.169058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.183464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.183482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.198132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.198149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:56.860 [2024-12-13 09:41:09.213640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:56.860 [2024-12-13 09:41:09.213658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.227782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.227802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.242327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.242345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.257524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.257543] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.271893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.271911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.286494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.286512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.302094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.302112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.317538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.317556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.331508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.331527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.346176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.346194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.362380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.362398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.377550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.377568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.391492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.391511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.406236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.406253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.421659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.421678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.435331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.435349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.450079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.450096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.465959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.465977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.119 [2024-12-13 09:41:09.477104] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.119 [2024-12-13 09:41:09.477122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.491283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.491300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.506105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.506122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.521498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.521517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 16856.60 IOPS, 131.69 MiB/s [2024-12-13T08:41:09.745Z] [2024-12-13 09:41:09.534440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.534466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 00:29:57.379 Latency(us) 00:29:57.379 [2024-12-13T08:41:09.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.379 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:57.379 Nvme1n1 : 5.01 16858.90 131.71 0.00 0.00 7584.60 2246.95 12670.29 00:29:57.379 [2024-12-13T08:41:09.745Z] =================================================================================================================== 00:29:57.379 [2024-12-13T08:41:09.745Z] Total : 16858.90 131.71 0.00 0.00 7584.60 2246.95 12670.29 00:29:57.379 [2024-12-13 09:41:09.545779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.545795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.557783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.557797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.569791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.569812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.581783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.581798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.593784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.593798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.605792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.605813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.617777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.617790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 
09:41:09.629779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.629790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.641784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.641797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.653776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.653785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.665778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.665790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.677775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.677785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 [2024-12-13 09:41:09.689776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:57.379 [2024-12-13 09:41:09.689784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:57.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3537265) - No such process 00:29:57.379 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3537265 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:57.380 delay0 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.380 09:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:57.639 [2024-12-13 09:41:09.823594] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:04.205 [2024-12-13 09:41:16.237714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0440 is same with the state(6) to be set 00:30:04.205 [2024-12-13 09:41:16.237749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0440 is same with the state(6) to be set 00:30:04.205 Initializing NVMe Controllers 00:30:04.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.205 Initialization complete. Launching workers. 00:30:04.205 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1207 00:30:04.205 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1487, failed to submit 40 00:30:04.205 success 1379, unsuccessful 108, failed 0 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.205 rmmod nvme_tcp 00:30:04.205 rmmod nvme_fabrics 00:30:04.205 rmmod nvme_keyring 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3535470 ']' 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3535470 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3535470 ']' 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3535470 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3535470 00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 
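For context on the steps traced above (zcopy.sh lines 52-56): the script removes NSID 1 from nqn.2016-06.io.spdk:cnode1, creates a delay bdev (delay0) on top of malloc0, re-adds it as NSID 1, and then runs the abort example against the target; the long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors earlier in the log is output from repeated attempts to add a namespace with NSID 1 while one was still attached, which is exactly what the messages state. The bdevperf summary above is also self-consistent: 16858.90 IOPS at the 8192-byte I/O size is 16858.90 x 8192 B ≈ 131.71 MiB/s. The snippet below is a minimal, hypothetical sketch of the same sequence issued by hand; it is not output from this run. It assumes a checkout of the SPDK source tree with the usual scripts/rpc.py wrapper and a running target at 10.0.0.2:4420; the RPC names and arguments are copied from the rpc_cmd traces above.

  # Sketch only (assumed manual reproduction, not from this log):
  # 1. Remove the existing namespace so NSID 1 becomes free again.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # 2. Create a delay bdev over malloc0 with the latency values used by the test.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # 3. Attach the delay bdev back to the subsystem as NSID 1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 4. Drive abort traffic at the now-slow namespace, as the log shows.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'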
00:30:04.205 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3535470' 00:30:04.206 killing process with pid 3535470 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3535470 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3535470 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.206 09:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.751 00:30:06.751 real 0m31.021s 00:30:06.751 user 0m40.873s 00:30:06.751 sys 0m11.893s 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.751 ************************************ 00:30:06.751 END TEST nvmf_zcopy 00:30:06.751 ************************************ 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.751 ************************************ 00:30:06.751 START TEST nvmf_nmic 00:30:06.751 ************************************ 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:06.751 * Looking for test storage... 00:30:06.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.751 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.752 --rc genhtml_branch_coverage=1 00:30:06.752 --rc genhtml_function_coverage=1 00:30:06.752 --rc genhtml_legend=1 00:30:06.752 --rc geninfo_all_blocks=1 00:30:06.752 --rc geninfo_unexecuted_blocks=1 00:30:06.752 00:30:06.752 ' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.752 --rc genhtml_branch_coverage=1 00:30:06.752 --rc genhtml_function_coverage=1 00:30:06.752 --rc genhtml_legend=1 00:30:06.752 --rc geninfo_all_blocks=1 00:30:06.752 --rc geninfo_unexecuted_blocks=1 00:30:06.752 00:30:06.752 ' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.752 --rc genhtml_branch_coverage=1 00:30:06.752 --rc genhtml_function_coverage=1 00:30:06.752 --rc genhtml_legend=1 00:30:06.752 --rc geninfo_all_blocks=1 00:30:06.752 --rc geninfo_unexecuted_blocks=1 00:30:06.752 00:30:06.752 ' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.752 --rc genhtml_branch_coverage=1 00:30:06.752 --rc genhtml_function_coverage=1 00:30:06.752 --rc genhtml_legend=1 00:30:06.752 --rc geninfo_all_blocks=1 00:30:06.752 --rc geninfo_unexecuted_blocks=1 00:30:06.752 00:30:06.752 ' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.752 09:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.752 09:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.021 09:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:12.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.021 09:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:12.021 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:12.021 Found net devices under 0000:af:00.0: cvl_0_0 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.021 
09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:12.021 Found net devices under 0000:af:00.1: cvl_0_1 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.021 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:30:12.022 00:30:12.022 --- 10.0.0.2 ping statistics --- 00:30:12.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.022 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:12.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:12.022 00:30:12.022 --- 10.0.0.1 ping statistics --- 00:30:12.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.022 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3542505 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3542505 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3542505 ']' 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.022 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.281 [2024-12-13 09:41:24.418657] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.281 [2024-12-13 09:41:24.419635] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:30:12.281 [2024-12-13 09:41:24.419675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.281 [2024-12-13 09:41:24.488013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.281 [2024-12-13 09:41:24.533262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.281 [2024-12-13 09:41:24.533294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.281 [2024-12-13 09:41:24.533302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.281 [2024-12-13 09:41:24.533307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.281 [2024-12-13 09:41:24.533312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.281 [2024-12-13 09:41:24.534596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.281 [2024-12-13 09:41:24.534610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.281 [2024-12-13 09:41:24.534703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.281 [2024-12-13 09:41:24.534704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.281 [2024-12-13 09:41:24.603886] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.281 [2024-12-13 09:41:24.604063] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:12.281 [2024-12-13 09:41:24.604162] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:12.282 [2024-12-13 09:41:24.604297] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.282 [2024-12-13 09:41:24.604478] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:12.282 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.282 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:12.282 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.282 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.282 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 [2024-12-13 09:41:24.671414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 Malloc0 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.541 
09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 [2024-12-13 09:41:24.739382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:12.541 test case1: single bdev can't be used in multiple subsystems 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 [2024-12-13 09:41:24.763108] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:12.541 [2024-12-13 09:41:24.763129] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:12.541 [2024-12-13 09:41:24.763137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:12.541 request: 00:30:12.541 { 00:30:12.541 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:12.541 "namespace": { 00:30:12.541 "bdev_name": "Malloc0", 00:30:12.541 "no_auto_visible": false, 00:30:12.541 "hide_metadata": false 00:30:12.541 }, 00:30:12.541 "method": "nvmf_subsystem_add_ns", 00:30:12.541 "req_id": 1 00:30:12.541 } 00:30:12.541 Got JSON-RPC error response 00:30:12.541 response: 00:30:12.541 { 00:30:12.541 "code": -32602, 00:30:12.541 "message": "Invalid parameters" 00:30:12.541 } 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:12.541 09:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:12.541 Adding namespace failed - expected result. 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:12.541 test case2: host connect to nvmf target in multiple paths 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:12.541 [2024-12-13 09:41:24.775200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.541 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:12.800 09:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:13.058 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:13.058 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:13.058 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:13.058 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:13.058 09:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:14.960 09:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:14.960 [global] 00:30:14.960 thread=1 00:30:14.960 invalidate=1 
00:30:14.960 rw=write 00:30:14.960 time_based=1 00:30:14.960 runtime=1 00:30:14.960 ioengine=libaio 00:30:14.960 direct=1 00:30:14.960 bs=4096 00:30:14.960 iodepth=1 00:30:14.960 norandommap=0 00:30:14.960 numjobs=1 00:30:14.960 00:30:14.960 verify_dump=1 00:30:14.960 verify_backlog=512 00:30:14.960 verify_state_save=0 00:30:14.960 do_verify=1 00:30:14.960 verify=crc32c-intel 00:30:14.960 [job0] 00:30:14.960 filename=/dev/nvme0n1 00:30:15.218 Could not set queue depth (nvme0n1) 00:30:15.476 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:15.476 fio-3.35 00:30:15.476 Starting 1 thread 00:30:16.412 00:30:16.412 job0: (groupid=0, jobs=1): err= 0: pid=3543221: Fri Dec 13 09:41:28 2024 00:30:16.412 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:30:16.412 slat (nsec): min=9848, max=24578, avg=22313.32, stdev=2818.95 00:30:16.412 clat (usec): min=40898, max=41946, avg=41058.84, stdev=282.44 00:30:16.412 lat (usec): min=40921, max=41970, avg=41081.15, stdev=282.60 00:30:16.412 clat percentiles (usec): 00:30:16.412 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:16.412 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:16.412 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:30:16.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:16.412 | 99.99th=[42206] 00:30:16.412 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:30:16.412 slat (usec): min=10, max=26781, avg=63.39, stdev=1183.08 00:30:16.412 clat (usec): min=132, max=322, avg=144.77, stdev=11.50 00:30:16.412 lat (usec): min=143, max=27040, avg=208.15, stdev=1188.19 00:30:16.412 clat percentiles (usec): 00:30:16.412 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:30:16.412 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:30:16.412 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 151], 95.00th=[ 155], 00:30:16.412 | 99.00th=[ 174], 99.50th=[ 227], 99.90th=[ 322], 99.95th=[ 322], 00:30:16.412 | 99.99th=[ 322] 00:30:16.412 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:16.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:16.412 lat (usec) : 250=95.51%, 500=0.37% 00:30:16.412 lat (msec) : 50=4.12% 00:30:16.412 cpu : usr=0.40%, sys=0.49%, ctx=539, majf=0, minf=1 00:30:16.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.413 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:16.413 00:30:16.413 Run status group 0 (all jobs): 00:30:16.413 READ: bw=87.0KiB/s (89.0kB/s), 87.0KiB/s-87.0KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1012-1012msec 00:30:16.413 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:30:16.413 00:30:16.413 Disk stats (read/write): 00:30:16.413 nvme0n1: ios=45/512, merge=0/0, ticks=1765/70, in_queue=1835, util=98.40% 00:30:16.413 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:16.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:16.671 09:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:16.671 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:16.671 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.672 rmmod nvme_tcp 00:30:16.672 rmmod nvme_fabrics 00:30:16.672 rmmod nvme_keyring 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3542505 ']' 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3542505 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3542505 ']' 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3542505 00:30:16.672 09:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:16.672 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.672 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3542505 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3542505' 00:30:16.931 killing process with pid 3542505 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3542505 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3542505 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.931 09:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.467 00:30:19.467 real 0m12.654s 00:30:19.467 user 0m24.278s 00:30:19.467 sys 0m5.681s 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:19.467 ************************************ 00:30:19.467 END TEST nvmf_nmic 00:30:19.467 ************************************ 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.467 ************************************ 00:30:19.467 START TEST nvmf_fio_target 00:30:19.467 ************************************ 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:19.467 * Looking for test storage... 
00:30:19.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.467 --rc genhtml_branch_coverage=1 00:30:19.467 --rc genhtml_function_coverage=1 00:30:19.467 --rc genhtml_legend=1 00:30:19.467 --rc geninfo_all_blocks=1 00:30:19.467 --rc geninfo_unexecuted_blocks=1 00:30:19.467 00:30:19.467 ' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.467 --rc genhtml_branch_coverage=1 00:30:19.467 --rc genhtml_function_coverage=1 00:30:19.467 --rc genhtml_legend=1 00:30:19.467 --rc geninfo_all_blocks=1 00:30:19.467 --rc geninfo_unexecuted_blocks=1 00:30:19.467 00:30:19.467 ' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.467 --rc genhtml_branch_coverage=1 00:30:19.467 --rc genhtml_function_coverage=1 00:30:19.467 --rc genhtml_legend=1 00:30:19.467 --rc geninfo_all_blocks=1 00:30:19.467 --rc geninfo_unexecuted_blocks=1 00:30:19.467 00:30:19.467 ' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.467 --rc genhtml_branch_coverage=1 00:30:19.467 --rc genhtml_function_coverage=1 00:30:19.467 --rc genhtml_legend=1 00:30:19.467 --rc geninfo_all_blocks=1 00:30:19.467 --rc geninfo_unexecuted_blocks=1 00:30:19.467 
00:30:19.467 ' 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.467 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.468 09:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.810 09:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.810 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:24.811 09:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:24.811 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:24.811 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:24.811 Found net 
devices under 0000:af:00.0: cvl_0_0 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:24.811 Found net devices under 0000:af:00.1: cvl_0_1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:24.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:30:24.811 00:30:24.811 --- 10.0.0.2 ping statistics --- 00:30:24.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.811 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:24.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:30:24.811 00:30:24.811 --- 10.0.0.1 ping statistics --- 00:30:24.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.811 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:24.811 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3546801 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3546801 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3546801 ']' 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
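The nvmf_tcp_init steps traced above move one port of the E810 pair into its own network namespace so the SPDK target and the kernel initiator can exchange NVMe/TCP traffic over real NICs on a single host. A minimal manual sketch of the same plumbing, using only the interface names, addresses and namespace name that appear in this log (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2, cvl_0_0_ns_spdk):

    # target-side port goes into a dedicated namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1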
00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.812 09:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.812 [2024-12-13 09:41:36.929177] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:24.812 [2024-12-13 09:41:36.930087] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:30:24.812 [2024-12-13 09:41:36.930120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.812 [2024-12-13 09:41:36.997509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.812 [2024-12-13 09:41:37.039273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.812 [2024-12-13 09:41:37.039306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.812 [2024-12-13 09:41:37.039312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.812 [2024-12-13 09:41:37.039318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.812 [2024-12-13 09:41:37.039323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.812 [2024-12-13 09:41:37.040635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.812 [2024-12-13 09:41:37.040734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.812 [2024-12-13 09:41:37.040813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.812 [2024-12-13 09:41:37.040815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.812 [2024-12-13 09:41:37.108103] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:24.812 [2024-12-13 09:41:37.108296] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:24.812 [2024-12-13 09:41:37.108394] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:24.812 [2024-12-13 09:41:37.108554] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:24.812 [2024-12-13 09:41:37.108718] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
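With nvmf_tgt now running inside the namespace in interrupt mode, fio.sh provisions it entirely over the RPC socket. Condensed from the rpc.py calls that follow in this log (paths shortened, host NQN options omitted; the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at the top of fio.sh):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # plain malloc bdevs, plus a raid0 and a concat raid built from further malloc bdevs
    $rpc bdev_malloc_create 64 512                                   # repeated for Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0  -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem, four namespaces, one TCP listener
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # then Malloc1, raid0, concat0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # kernel initiator connects and exposes the four namespaces as /dev/nvme0n1..n4
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420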
00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.812 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:25.071 [2024-12-13 09:41:37.345312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.071 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:25.330 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:25.330 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:25.588 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:25.588 09:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:25.847 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:25.847 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.106 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:26.106 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:26.106 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.365 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:26.365 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.623 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:26.623 09:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.882 09:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:26.882 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:26.882 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:27.141 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:27.142 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:27.400 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:27.400 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:27.658 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.658 [2024-12-13 09:41:39.949424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.658 09:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:27.917 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:28.175 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:28.434 09:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:30.334 09:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:30.334 [global] 00:30:30.334 thread=1 00:30:30.334 invalidate=1 00:30:30.334 rw=write 00:30:30.334 time_based=1 00:30:30.334 runtime=1 00:30:30.334 ioengine=libaio 00:30:30.334 direct=1 00:30:30.334 bs=4096 00:30:30.334 iodepth=1 00:30:30.334 norandommap=0 00:30:30.334 numjobs=1 00:30:30.334 00:30:30.334 verify_dump=1 00:30:30.334 verify_backlog=512 00:30:30.334 verify_state_save=0 00:30:30.334 do_verify=1 00:30:30.334 verify=crc32c-intel 00:30:30.334 [job0] 00:30:30.334 filename=/dev/nvme0n1 00:30:30.334 [job1] 00:30:30.334 filename=/dev/nvme0n2 00:30:30.334 [job2] 00:30:30.334 filename=/dev/nvme0n3 00:30:30.334 [job3] 00:30:30.334 filename=/dev/nvme0n4 00:30:30.592 Could not set queue depth (nvme0n1) 00:30:30.592 Could not set queue depth (nvme0n2) 00:30:30.592 Could not set queue depth (nvme0n3) 00:30:30.592 Could not set queue depth (nvme0n4) 00:30:30.850 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.850 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.850 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.850 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.850 fio-3.35 00:30:30.850 Starting 4 threads 00:30:32.235 00:30:32.235 job0: (groupid=0, jobs=1): err= 0: pid=3547899: Fri Dec 13 09:41:44 2024 00:30:32.235 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:30:32.235 slat (nsec): min=10617, max=24159, avg=22635.95, stdev=2702.69 00:30:32.235 clat (usec): min=40848, max=41973, avg=41021.58, stdev=217.53 00:30:32.235 lat (usec): min=40871, max=41996, avg=41044.22, stdev=217.67 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:32.235 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.235 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:32.235 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:32.235 | 99.99th=[42206] 00:30:32.235 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:30:32.235 slat (usec): min=9, max=41817, avg=92.53, stdev=1847.60 00:30:32.235 clat (usec): min=131, max=675, avg=172.54, stdev=40.42 00:30:32.235 lat (usec): min=142, max=42035, avg=265.07, stdev=1850.09 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:30:32.235 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:30:32.235 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 206], 95.00th=[ 225], 00:30:32.235 | 
99.00th=[ 285], 99.50th=[ 408], 99.90th=[ 676], 99.95th=[ 676], 00:30:32.235 | 99.99th=[ 676] 00:30:32.235 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:32.235 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:32.235 lat (usec) : 250=94.19%, 500=1.31%, 750=0.37% 00:30:32.235 lat (msec) : 50=4.12% 00:30:32.235 cpu : usr=0.10%, sys=0.67%, ctx=538, majf=0, minf=1 00:30:32.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.235 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:32.235 job1: (groupid=0, jobs=1): err= 0: pid=3547901: Fri Dec 13 09:41:44 2024 00:30:32.235 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:30:32.235 slat (nsec): min=10160, max=23702, avg=22165.50, stdev=2704.44 00:30:32.235 clat (usec): min=40439, max=41045, avg=40947.56, stdev=117.22 00:30:32.235 lat (usec): min=40449, max=41067, avg=40969.72, stdev=119.79 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:32.235 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.235 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:32.235 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:32.235 | 99.99th=[41157] 00:30:32.235 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:30:32.235 slat (nsec): min=10795, max=38604, avg=12432.31, stdev=2546.78 00:30:32.235 clat (usec): min=150, max=339, avg=217.67, stdev=34.93 00:30:32.235 lat (usec): min=162, max=351, avg=230.10, stdev=35.01 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:30:32.235 | 30.00th=[ 190], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:30:32.235 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 243], 95.00th=[ 245], 00:30:32.235 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 338], 99.95th=[ 338], 00:30:32.235 | 99.99th=[ 338] 00:30:32.235 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:32.235 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:32.235 lat (usec) : 250=94.19%, 500=1.69% 00:30:32.235 lat (msec) : 50=4.12% 00:30:32.235 cpu : usr=0.59%, sys=0.78%, ctx=535, majf=0, minf=1 00:30:32.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.235 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:32.235 job2: (groupid=0, jobs=1): err= 0: pid=3547920: Fri Dec 13 09:41:44 2024 00:30:32.235 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:30:32.235 slat (nsec): min=10409, max=23406, avg=22196.64, stdev=2649.97 00:30:32.235 clat (usec): min=40891, max=41992, avg=41025.50, stdev=218.88 00:30:32.235 lat (usec): min=40914, max=42014, avg=41047.69, stdev=218.79 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:32.235 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.235 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:32.235 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:32.235 | 99.99th=[42206] 00:30:32.235 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:30:32.235 slat (nsec): min=9536, max=42138, avg=11231.52, stdev=2294.67 00:30:32.235 clat (usec): min=147, max=488, avg=195.16, stdev=44.30 00:30:32.235 lat (usec): min=159, max=499, avg=206.39, stdev=45.07 00:30:32.235 clat percentiles (usec): 00:30:32.235 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:30:32.235 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:30:32.235 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 235], 95.00th=[ 297], 00:30:32.236 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 490], 99.95th=[ 490], 00:30:32.236 | 99.99th=[ 490] 00:30:32.236 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:32.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:32.236 lat (usec) : 250=88.39%, 500=7.49% 00:30:32.236 lat (msec) : 50=4.12% 00:30:32.236 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=2 00:30:32.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.236 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:32.236 job3: (groupid=0, jobs=1): err= 0: pid=3547930: Fri Dec 13 09:41:44 2024 00:30:32.236 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:30:32.236 slat (nsec): min=10876, max=25828, avg=23626.00, stdev=2956.95 00:30:32.236 clat (usec): min=40570, max=41980, avg=41040.12, stdev=315.84 00:30:32.236 lat (usec): min=40580, max=42005, avg=41063.74, stdev=316.88 00:30:32.236 clat percentiles (usec): 00:30:32.236 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:32.236 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.236 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:30:32.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:32.236 | 99.99th=[42206] 00:30:32.236 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:30:32.236 slat (nsec): min=11312, max=40996, avg=13149.31, stdev=2290.59 00:30:32.236 clat (usec): min=142, max=284, avg=183.53, stdev=16.98 00:30:32.236 lat (usec): min=153, max=325, avg=196.68, stdev=17.57 00:30:32.236 clat percentiles (usec): 00:30:32.236 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:30:32.236 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:30:32.236 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 217], 00:30:32.236 | 99.00th=[ 233], 99.50th=[ 262], 99.90th=[ 285], 99.95th=[ 285], 00:30:32.236 | 99.99th=[ 285] 00:30:32.236 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:32.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:32.236 lat (usec) : 250=95.13%, 500=0.75% 00:30:32.236 lat (msec) : 50=4.12% 00:30:32.236 cpu : usr=0.50%, sys=0.90%, ctx=535, majf=0, minf=1 00:30:32.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.236 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.236 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:32.236 00:30:32.236 Run status group 0 (all jobs): 00:30:32.236 READ: bw=338KiB/s (347kB/s), 84.6KiB/s-87.5KiB/s (86.6kB/s-89.6kB/s), io=352KiB (360kB), run=1006-1040msec 00:30:32.236 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2036KiB/s (2016kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1040msec 00:30:32.236 00:30:32.236 Disk stats (read/write): 00:30:32.236 nvme0n1: ios=69/512, merge=0/0, ticks=1123/88, in_queue=1211, util=97.39% 00:30:32.236 nvme0n2: ios=40/512, merge=0/0, ticks=1640/107, in_queue=1747, util=97.63% 00:30:32.236 nvme0n3: ios=17/512, merge=0/0, ticks=698/96, in_queue=794, util=87.61% 00:30:32.236 nvme0n4: ios=39/512, merge=0/0, ticks=1601/83, in_queue=1684, util=97.46% 00:30:32.236 09:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:32.236 [global] 00:30:32.236 thread=1 00:30:32.236 invalidate=1 00:30:32.236 rw=randwrite 00:30:32.236 time_based=1 00:30:32.236 runtime=1 00:30:32.236 ioengine=libaio 00:30:32.236 direct=1 00:30:32.236 bs=4096 00:30:32.236 iodepth=1 00:30:32.236 norandommap=0 00:30:32.236 numjobs=1 00:30:32.236 00:30:32.236 verify_dump=1 00:30:32.236 verify_backlog=512 00:30:32.236 verify_state_save=0 00:30:32.236 do_verify=1 00:30:32.236 verify=crc32c-intel 00:30:32.236 [job0] 00:30:32.236 filename=/dev/nvme0n1 00:30:32.236 [job1] 00:30:32.236 filename=/dev/nvme0n2 00:30:32.236 [job2] 00:30:32.236 filename=/dev/nvme0n3 00:30:32.236 [job3] 00:30:32.236 filename=/dev/nvme0n4 00:30:32.236 Could not set queue depth (nvme0n1) 00:30:32.236 Could not set queue depth (nvme0n2) 00:30:32.236 Could not set queue depth (nvme0n3) 00:30:32.236 Could not set queue depth (nvme0n4) 00:30:32.494 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.494 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.494 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.494 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.494 fio-3.35 00:30:32.494 Starting 4 threads 00:30:33.871 00:30:33.871 job0: (groupid=0, jobs=1): err= 0: pid=3548348: Fri Dec 13 09:41:45 2024 00:30:33.871 read: IOPS=2368, BW=9475KiB/s (9702kB/s)(9484KiB/1001msec) 00:30:33.871 slat (nsec): min=6339, max=26541, avg=7323.01, stdev=912.19 00:30:33.871 clat (usec): min=202, max=507, avg=235.12, stdev=22.96 00:30:33.871 lat (usec): min=209, max=514, avg=242.44, stdev=22.99 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:30:33.871 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 241], 60.00th=[ 245], 00:30:33.871 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:30:33.871 | 99.00th=[ 293], 99.50th=[ 338], 99.90th=[ 502], 99.95th=[ 502], 00:30:33.871 | 99.99th=[ 506] 00:30:33.871 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:30:33.871 slat (nsec): min=9106, max=37264, avg=10350.68, stdev=1350.30 00:30:33.871 clat 
(usec): min=120, max=376, avg=151.39, stdev=25.35 00:30:33.871 lat (usec): min=130, max=407, avg=161.74, stdev=25.51 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 135], 00:30:33.871 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 149], 00:30:33.871 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 210], 00:30:33.871 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 285], 99.95th=[ 338], 00:30:33.871 | 99.99th=[ 375] 00:30:33.871 bw ( KiB/s): min=11392, max=11392, per=63.10%, avg=11392.00, stdev= 0.00, samples=1 00:30:33.871 iops : min= 2848, max= 2848, avg=2848.00, stdev= 0.00, samples=1 00:30:33.871 lat (usec) : 250=92.29%, 500=7.65%, 750=0.06% 00:30:33.871 cpu : usr=2.40%, sys=4.50%, ctx=4933, majf=0, minf=1 00:30:33.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 issued rwts: total=2371,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:33.871 job1: (groupid=0, jobs=1): err= 0: pid=3548363: Fri Dec 13 09:41:45 2024 00:30:33.871 read: IOPS=370, BW=1481KiB/s (1516kB/s)(1512KiB/1021msec) 00:30:33.871 slat (nsec): min=5211, max=28419, avg=8102.18, stdev=3126.42 00:30:33.871 clat (usec): min=210, max=42196, avg=2338.53, stdev=8930.71 00:30:33.871 lat (usec): min=217, max=42204, avg=2346.63, stdev=8933.08 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 260], 00:30:33.871 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:30:33.871 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[41157], 00:30:33.871 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:30:33.871 | 99.99th=[42206] 00:30:33.871 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:30:33.871 slat (nsec): min=8727, max=38421, avg=9734.89, stdev=1506.64 00:30:33.871 clat (usec): min=215, max=355, avg=247.63, stdev=12.02 00:30:33.871 lat (usec): min=228, max=367, avg=257.36, stdev=12.42 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 241], 00:30:33.871 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 245], 00:30:33.871 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 277], 00:30:33.871 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 355], 99.95th=[ 355], 00:30:33.871 | 99.99th=[ 355] 00:30:33.871 bw ( KiB/s): min= 4096, max= 4096, per=22.69%, avg=4096.00, stdev= 0.00, samples=1 00:30:33.871 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:33.871 lat (usec) : 250=51.69%, 500=46.07%, 750=0.11% 00:30:33.871 lat (msec) : 50=2.13% 00:30:33.871 cpu : usr=0.29%, sys=0.88%, ctx=890, majf=0, minf=2 00:30:33.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 issued rwts: total=378,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:33.871 job2: (groupid=0, jobs=1): err= 0: pid=3548374: Fri Dec 13 09:41:45 2024 00:30:33.871 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:30:33.871 
slat (nsec): min=11505, max=23624, avg=22631.32, stdev=2496.91 00:30:33.871 clat (usec): min=40932, max=41925, avg=41029.94, stdev=214.74 00:30:33.871 lat (usec): min=40955, max=41948, avg=41052.57, stdev=214.05 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:33.871 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:33.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:33.871 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:33.871 | 99.99th=[41681] 00:30:33.871 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:30:33.871 slat (nsec): min=10755, max=41445, avg=11970.64, stdev=2049.16 00:30:33.871 clat (usec): min=159, max=452, avg=182.47, stdev=17.51 00:30:33.871 lat (usec): min=170, max=465, avg=194.44, stdev=18.19 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:30:33.871 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:30:33.871 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:30:33.871 | 99.00th=[ 229], 99.50th=[ 297], 99.90th=[ 453], 99.95th=[ 453], 00:30:33.871 | 99.99th=[ 453] 00:30:33.871 bw ( KiB/s): min= 4096, max= 4096, per=22.69%, avg=4096.00, stdev= 0.00, samples=1 00:30:33.871 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:33.871 lat (usec) : 250=95.13%, 500=0.75% 00:30:33.871 lat (msec) : 50=4.12% 00:30:33.871 cpu : usr=0.10%, sys=0.80%, ctx=534, majf=0, minf=2 00:30:33.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.871 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:33.871 job3: (groupid=0, jobs=1): err= 0: pid=3548379: Fri Dec 13 09:41:45 2024 00:30:33.871 read: IOPS=868, BW=3475KiB/s (3558kB/s)(3548KiB/1021msec) 00:30:33.871 slat (nsec): min=5758, max=23245, avg=7546.10, stdev=1442.89 00:30:33.871 clat (usec): min=220, max=41995, avg=915.21, stdev=5101.72 00:30:33.871 lat (usec): min=228, max=42006, avg=922.75, stdev=5102.53 00:30:33.871 clat percentiles (usec): 00:30:33.871 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:30:33.871 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:30:33.871 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 334], 00:30:33.871 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:33.871 | 99.99th=[42206] 00:30:33.872 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:30:33.872 slat (nsec): min=8876, max=49958, avg=10180.45, stdev=1830.39 00:30:33.872 clat (usec): min=139, max=291, avg=183.21, stdev=16.39 00:30:33.872 lat (usec): min=151, max=301, avg=193.39, stdev=16.53 00:30:33.872 clat percentiles (usec): 00:30:33.872 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:30:33.872 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:30:33.872 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:30:33.872 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 285], 99.95th=[ 293], 00:30:33.872 | 99.99th=[ 293] 00:30:33.872 bw ( KiB/s): min= 8192, max= 8192, per=45.38%, avg=8192.00, stdev= 0.00, samples=1 
00:30:33.872 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:33.872 lat (usec) : 250=75.88%, 500=23.29%, 750=0.05% 00:30:33.872 lat (msec) : 10=0.05%, 50=0.73% 00:30:33.872 cpu : usr=1.08%, sys=1.57%, ctx=1911, majf=0, minf=1 00:30:33.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.872 issued rwts: total=887,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:33.872 00:30:33.872 Run status group 0 (all jobs): 00:30:33.872 READ: bw=14.0MiB/s (14.7MB/s), 87.6KiB/s-9475KiB/s (89.8kB/s-9702kB/s), io=14.3MiB (15.0MB), run=1001-1021msec 00:30:33.872 WRITE: bw=17.6MiB/s (18.5MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1021msec 00:30:33.872 00:30:33.872 Disk stats (read/write): 00:30:33.872 nvme0n1: ios=2072/2048, merge=0/0, ticks=1428/305, in_queue=1733, util=98.10% 00:30:33.872 nvme0n2: ios=373/512, merge=0/0, ticks=679/123, in_queue=802, util=86.46% 00:30:33.872 nvme0n3: ios=74/512, merge=0/0, ticks=898/85, in_queue=983, util=94.87% 00:30:33.872 nvme0n4: ios=905/1024, merge=0/0, ticks=695/187, in_queue=882, util=90.19% 00:30:33.872 09:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:33.872 [global] 00:30:33.872 thread=1 00:30:33.872 invalidate=1 00:30:33.872 rw=write 00:30:33.872 time_based=1 00:30:33.872 runtime=1 00:30:33.872 ioengine=libaio 00:30:33.872 direct=1 00:30:33.872 bs=4096 00:30:33.872 iodepth=128 00:30:33.872 norandommap=0 00:30:33.872 numjobs=1 00:30:33.872 00:30:33.872 verify_dump=1 00:30:33.872 verify_backlog=512 00:30:33.872 verify_state_save=0 00:30:33.872 do_verify=1 00:30:33.872 verify=crc32c-intel 00:30:33.872 [job0] 00:30:33.872 filename=/dev/nvme0n1 00:30:33.872 [job1] 00:30:33.872 filename=/dev/nvme0n2 00:30:33.872 [job2] 00:30:33.872 filename=/dev/nvme0n3 00:30:33.872 [job3] 00:30:33.872 filename=/dev/nvme0n4 00:30:33.872 Could not set queue depth (nvme0n1) 00:30:33.872 Could not set queue depth (nvme0n2) 00:30:33.872 Could not set queue depth (nvme0n3) 00:30:33.872 Could not set queue depth (nvme0n4) 00:30:33.872 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:33.872 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:33.872 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:33.872 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:33.872 fio-3.35 00:30:33.872 Starting 4 threads 00:30:35.249 00:30:35.249 job0: (groupid=0, jobs=1): err= 0: pid=3548800: Fri Dec 13 09:41:47 2024 00:30:35.249 read: IOPS=5787, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1003msec) 00:30:35.249 slat (nsec): min=1289, max=5285.5k, avg=75678.23, stdev=398616.15 00:30:35.249 clat (usec): min=1891, max=52144, avg=9753.15, stdev=2501.52 00:30:35.249 lat (usec): min=4669, max=52146, avg=9828.83, stdev=2506.34 00:30:35.249 clat percentiles (usec): 00:30:35.249 | 1.00th=[ 5276], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8586], 00:30:35.249 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 
60.00th=[10028], 00:30:35.249 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[12125], 00:30:35.249 | 99.00th=[13698], 99.50th=[15533], 99.90th=[48497], 99.95th=[48497], 00:30:35.249 | 99.99th=[52167] 00:30:35.249 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:30:35.249 slat (usec): min=2, max=21434, avg=85.85, stdev=538.09 00:30:35.249 clat (usec): min=5255, max=44123, avg=10983.81, stdev=5492.76 00:30:35.249 lat (usec): min=5266, max=44137, avg=11069.66, stdev=5533.64 00:30:35.249 clat percentiles (usec): 00:30:35.249 | 1.00th=[ 5932], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8291], 00:30:35.249 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:30:35.249 | 70.00th=[10552], 80.00th=[11207], 90.00th=[15664], 95.00th=[22152], 00:30:35.249 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:30:35.249 | 99.99th=[44303] 00:30:35.249 bw ( KiB/s): min=21552, max=27600, per=37.88%, avg=24576.00, stdev=4276.58, samples=2 00:30:35.249 iops : min= 5388, max= 6900, avg=6144.00, stdev=1069.15, samples=2 00:30:35.249 lat (msec) : 2=0.01%, 10=61.98%, 20=34.81%, 50=3.20%, 100=0.01% 00:30:35.249 cpu : usr=4.99%, sys=6.69%, ctx=646, majf=0, minf=1 00:30:35.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:35.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.249 issued rwts: total=5805,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.249 job1: (groupid=0, jobs=1): err= 0: pid=3548812: Fri Dec 13 09:41:47 2024 00:30:35.249 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:30:35.249 slat (nsec): min=1183, max=18064k, avg=116528.36, stdev=852600.86 00:30:35.249 clat (usec): min=2459, max=44530, avg=14090.84, stdev=7230.77 00:30:35.249 lat (usec): min=2464, max=44537, avg=14207.37, stdev=7289.16 00:30:35.249 clat percentiles (usec): 00:30:35.249 | 1.00th=[ 5604], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9241], 00:30:35.249 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11600], 60.00th=[13829], 00:30:35.249 | 70.00th=[15795], 80.00th=[16909], 90.00th=[22676], 95.00th=[33162], 00:30:35.249 | 99.00th=[39584], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:30:35.249 | 99.99th=[44303] 00:30:35.249 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1010msec); 0 zone resets 00:30:35.249 slat (usec): min=2, max=12870, avg=136.78, stdev=675.63 00:30:35.249 clat (usec): min=1692, max=85045, avg=19142.05, stdev=14468.85 00:30:35.249 lat (usec): min=1700, max=85056, avg=19278.83, stdev=14559.00 00:30:35.249 clat percentiles (usec): 00:30:35.249 | 1.00th=[ 5211], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8291], 00:30:35.249 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[12125], 60.00th=[15795], 00:30:35.249 | 70.00th=[25035], 80.00th=[30802], 90.00th=[38536], 95.00th=[42730], 00:30:35.249 | 99.00th=[78119], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:30:35.249 | 99.99th=[85459] 00:30:35.249 bw ( KiB/s): min=14960, max=16384, per=24.15%, avg=15672.00, stdev=1006.92, samples=2 00:30:35.249 iops : min= 3740, max= 4096, avg=3918.00, stdev=251.73, samples=2 00:30:35.249 lat (msec) : 2=0.05%, 4=0.58%, 10=35.07%, 20=39.84%, 50=22.79% 00:30:35.249 lat (msec) : 100=1.66% 00:30:35.249 cpu : usr=2.68%, sys=4.86%, ctx=447, majf=0, minf=2 00:30:35.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:30:35.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.249 issued rwts: total=3584,4046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.249 job2: (groupid=0, jobs=1): err= 0: pid=3548828: Fri Dec 13 09:41:47 2024 00:30:35.249 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:30:35.249 slat (nsec): min=1462, max=24638k, avg=124478.32, stdev=1102167.27 00:30:35.249 clat (usec): min=5137, max=46839, avg=15955.00, stdev=7136.97 00:30:35.249 lat (usec): min=5149, max=46864, avg=16079.48, stdev=7235.99 00:30:35.249 clat percentiles (usec): 00:30:35.249 | 1.00th=[ 7177], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[10028], 00:30:35.249 | 30.00th=[10552], 40.00th=[10814], 50.00th=[14484], 60.00th=[15795], 00:30:35.249 | 70.00th=[17433], 80.00th=[22152], 90.00th=[26346], 95.00th=[28181], 00:30:35.249 | 99.00th=[39060], 99.50th=[39060], 99.90th=[43254], 99.95th=[44827], 00:30:35.249 | 99.99th=[46924] 00:30:35.249 write: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1015msec); 0 zone resets 00:30:35.249 slat (usec): min=2, max=19595, avg=177.61, stdev=1112.36 00:30:35.249 clat (msec): min=3, max=148, avg=23.85, stdev=26.79 00:30:35.249 lat (msec): min=3, max=148, avg=24.03, stdev=26.96 00:30:35.249 clat percentiles (msec): 00:30:35.249 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:30:35.249 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 16], 60.00th=[ 18], 00:30:35.249 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 56], 95.00th=[ 89], 00:30:35.249 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:30:35.249 | 99.99th=[ 148] 00:30:35.249 bw ( KiB/s): min= 8816, max=16904, per=19.82%, avg=12860.00, stdev=5719.08, samples=2 00:30:35.249 iops : min= 2204, max= 4226, avg=3215.00, stdev=1429.77, samples=2 00:30:35.249 lat (msec) : 4=0.19%, 10=32.13%, 20=37.57%, 50=24.65%, 100=3.48% 00:30:35.249 lat (msec) : 250=1.98% 00:30:35.249 cpu : usr=4.04%, sys=4.04%, ctx=234, majf=0, minf=1 00:30:35.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:35.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.249 issued rwts: total=3072,3342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.249 job3: (groupid=0, jobs=1): err= 0: pid=3548833: Fri Dec 13 09:41:47 2024 00:30:35.249 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:30:35.249 slat (usec): min=2, max=24624, avg=197.43, stdev=1404.78 00:30:35.249 clat (msec): min=4, max=119, avg=21.64, stdev=18.75 00:30:35.249 lat (msec): min=4, max=119, avg=21.84, stdev=18.95 00:30:35.250 clat percentiles (msec): 00:30:35.250 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:30:35.250 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:30:35.250 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 65], 00:30:35.250 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 120], 00:30:35.250 | 99.99th=[ 120] 00:30:35.250 write: IOPS=2889, BW=11.3MiB/s (11.8MB/s)(11.5MiB/1015msec); 0 zone resets 00:30:35.250 slat (usec): min=2, max=21116, avg=160.40, stdev=1021.25 00:30:35.250 clat (msec): min=2, max=119, avg=25.01, stdev=21.20 00:30:35.250 lat (msec): min=2, max=119, 
avg=25.17, stdev=21.30 00:30:35.250 clat percentiles (msec): 00:30:35.250 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:30:35.250 | 30.00th=[ 12], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 22], 00:30:35.250 | 70.00th=[ 25], 80.00th=[ 27], 90.00th=[ 59], 95.00th=[ 81], 00:30:35.250 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 116], 99.95th=[ 120], 00:30:35.250 | 99.99th=[ 120] 00:30:35.250 bw ( KiB/s): min=10160, max=12288, per=17.30%, avg=11224.00, stdev=1504.72, samples=2 00:30:35.250 iops : min= 2540, max= 3072, avg=2806.00, stdev=376.18, samples=2 00:30:35.250 lat (msec) : 4=0.25%, 10=10.45%, 20=50.90%, 50=29.09%, 100=8.06% 00:30:35.250 lat (msec) : 250=1.24% 00:30:35.250 cpu : usr=2.56%, sys=4.44%, ctx=234, majf=0, minf=1 00:30:35.250 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:35.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:35.250 issued rwts: total=2560,2933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.250 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:35.250 00:30:35.250 Run status group 0 (all jobs): 00:30:35.250 READ: bw=57.8MiB/s (60.6MB/s), 9.85MiB/s-22.6MiB/s (10.3MB/s-23.7MB/s), io=58.7MiB (61.5MB), run=1003-1015msec 00:30:35.250 WRITE: bw=63.4MiB/s (66.4MB/s), 11.3MiB/s-23.9MiB/s (11.8MB/s-25.1MB/s), io=64.3MiB (67.4MB), run=1003-1015msec 00:30:35.250 00:30:35.250 Disk stats (read/write): 00:30:35.250 nvme0n1: ios=5144/5299, merge=0/0, ticks=17345/17240, in_queue=34585, util=98.00% 00:30:35.250 nvme0n2: ios=3095/3583, merge=0/0, ticks=38754/57458, in_queue=96212, util=98.48% 00:30:35.250 nvme0n3: ios=2184/2560, merge=0/0, ticks=35752/68982, in_queue=104734, util=98.33% 00:30:35.250 nvme0n4: ios=2327/2560, merge=0/0, ticks=47916/56767, in_queue=104683, util=98.42% 00:30:35.250 09:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:35.250 [global] 00:30:35.250 thread=1 00:30:35.250 invalidate=1 00:30:35.250 rw=randwrite 00:30:35.250 time_based=1 00:30:35.250 runtime=1 00:30:35.250 ioengine=libaio 00:30:35.250 direct=1 00:30:35.250 bs=4096 00:30:35.250 iodepth=128 00:30:35.250 norandommap=0 00:30:35.250 numjobs=1 00:30:35.250 00:30:35.250 verify_dump=1 00:30:35.250 verify_backlog=512 00:30:35.250 verify_state_save=0 00:30:35.250 do_verify=1 00:30:35.250 verify=crc32c-intel 00:30:35.250 [job0] 00:30:35.250 filename=/dev/nvme0n1 00:30:35.250 [job1] 00:30:35.250 filename=/dev/nvme0n2 00:30:35.250 [job2] 00:30:35.250 filename=/dev/nvme0n3 00:30:35.250 [job3] 00:30:35.250 filename=/dev/nvme0n4 00:30:35.250 Could not set queue depth (nvme0n1) 00:30:35.250 Could not set queue depth (nvme0n2) 00:30:35.250 Could not set queue depth (nvme0n3) 00:30:35.250 Could not set queue depth (nvme0n4) 00:30:35.508 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:35.508 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:35.508 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:35.508 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:35.508 fio-3.35 00:30:35.508 Starting 4 threads 00:30:36.885 00:30:36.885 
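The fio-wrapper flags (-i 4096 -d 128 -t randwrite -r 1 -v) expand into the job file echoed above: 4 KiB blocks, queue depth 128, one-second time-based randwrite per device, with crc32c-intel verification. A rough stand-alone equivalent, assuming the flag-to-option mapping sketched here and using the device names from the job file, would be:

    fio --thread --ioengine=libaio --direct=1 --bs=4096 --iodepth=128 \
        --rw=randwrite --time_based --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1 \
        --name=job0 --filename=/dev/nvme0n1 --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 --name=job3 --filename=/dev/nvme0n4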
job0: (groupid=0, jobs=1): err= 0: pid=3549202: Fri Dec 13 09:41:49 2024 00:30:36.885 read: IOPS=4125, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1003msec) 00:30:36.885 slat (nsec): min=1006, max=24439k, avg=136397.32, stdev=1111971.56 00:30:36.885 clat (usec): min=923, max=82860, avg=17782.99, stdev=14571.32 00:30:36.885 lat (usec): min=2444, max=82865, avg=17919.38, stdev=14654.21 00:30:36.885 clat percentiles (usec): 00:30:36.885 | 1.00th=[ 3851], 5.00th=[ 6587], 10.00th=[ 8094], 20.00th=[ 9372], 00:30:36.885 | 30.00th=[10159], 40.00th=[10683], 50.00th=[12387], 60.00th=[14222], 00:30:36.885 | 70.00th=[17171], 80.00th=[23725], 90.00th=[31851], 95.00th=[48497], 00:30:36.885 | 99.00th=[77071], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:30:36.885 | 99.99th=[83362] 00:30:36.885 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:30:36.885 slat (nsec): min=1735, max=10748k, avg=86648.81, stdev=488154.23 00:30:36.885 clat (usec): min=518, max=42131, avg=11648.12, stdev=5141.35 00:30:36.886 lat (usec): min=528, max=42158, avg=11734.77, stdev=5180.16 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 2040], 5.00th=[ 7111], 10.00th=[ 8029], 20.00th=[ 8455], 00:30:36.886 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:30:36.886 | 70.00th=[11207], 80.00th=[14877], 90.00th=[17171], 95.00th=[22938], 00:30:36.886 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:30:36.886 | 99.99th=[42206] 00:30:36.886 bw ( KiB/s): min=16351, max=19800, per=25.87%, avg=18075.50, stdev=2438.81, samples=2 00:30:36.886 iops : min= 4087, max= 4950, avg=4518.50, stdev=610.23, samples=2 00:30:36.886 lat (usec) : 750=0.06%, 1000=0.01% 00:30:36.886 lat (msec) : 2=0.40%, 4=1.19%, 10=32.68%, 20=49.93%, 50=13.48% 00:30:36.886 lat (msec) : 100=2.25% 00:30:36.886 cpu : usr=1.60%, sys=3.69%, ctx=571, majf=0, minf=1 00:30:36.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:36.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.886 issued rwts: total=4138,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.886 job1: (groupid=0, jobs=1): err= 0: pid=3549203: Fri Dec 13 09:41:49 2024 00:30:36.886 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:30:36.886 slat (nsec): min=1310, max=17876k, avg=105391.77, stdev=792099.33 00:30:36.886 clat (usec): min=4034, max=37871, avg=14249.29, stdev=5309.09 00:30:36.886 lat (usec): min=4090, max=37899, avg=14354.69, stdev=5354.55 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 6325], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10290], 00:30:36.886 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12125], 60.00th=[13829], 00:30:36.886 | 70.00th=[16450], 80.00th=[18482], 90.00th=[20841], 95.00th=[25035], 00:30:36.886 | 99.00th=[32113], 99.50th=[33162], 99.90th=[35914], 99.95th=[35914], 00:30:36.886 | 99.99th=[38011] 00:30:36.886 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1012msec); 0 zone resets 00:30:36.886 slat (nsec): min=1926, max=19151k, avg=113291.13, stdev=917580.82 00:30:36.886 clat (usec): min=1257, max=60041, avg=15198.95, stdev=8595.50 00:30:36.886 lat (usec): min=1265, max=60947, avg=15312.24, stdev=8653.73 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 2114], 5.00th=[ 5276], 10.00th=[ 8979], 20.00th=[10290], 00:30:36.886 | 30.00th=[10683], 
40.00th=[11207], 50.00th=[12387], 60.00th=[15008], 00:30:36.886 | 70.00th=[17171], 80.00th=[18744], 90.00th=[23462], 95.00th=[29230], 00:30:36.886 | 99.00th=[53216], 99.50th=[56361], 99.90th=[60031], 99.95th=[60031], 00:30:36.886 | 99.99th=[60031] 00:30:36.886 bw ( KiB/s): min=15248, max=20439, per=25.54%, avg=17843.50, stdev=3670.59, samples=2 00:30:36.886 iops : min= 3812, max= 5109, avg=4460.50, stdev=917.12, samples=2 00:30:36.886 lat (msec) : 2=0.38%, 4=1.13%, 10=15.70%, 20=66.94%, 50=14.89% 00:30:36.886 lat (msec) : 100=0.97% 00:30:36.886 cpu : usr=3.36%, sys=5.44%, ctx=306, majf=0, minf=1 00:30:36.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:36.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.886 issued rwts: total=4096,4594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.886 job2: (groupid=0, jobs=1): err= 0: pid=3549204: Fri Dec 13 09:41:49 2024 00:30:36.886 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:30:36.886 slat (nsec): min=1170, max=33074k, avg=90473.47, stdev=910280.62 00:30:36.886 clat (usec): min=2531, max=52885, avg=12961.67, stdev=6009.49 00:30:36.886 lat (usec): min=2535, max=52920, avg=13052.15, stdev=6084.01 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 4948], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 9503], 00:30:36.886 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11600], 60.00th=[12518], 00:30:36.886 | 70.00th=[13566], 80.00th=[14353], 90.00th=[18220], 95.00th=[25297], 00:30:36.886 | 99.00th=[34866], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:30:36.886 | 99.99th=[52691] 00:30:36.886 write: IOPS=5297, BW=20.7MiB/s (21.7MB/s)(20.9MiB/1012msec); 0 zone resets 00:30:36.886 slat (nsec): min=1941, max=14593k, avg=74402.54, stdev=606309.22 00:30:36.886 clat (usec): min=397, max=30869, avg=11547.60, stdev=4692.07 00:30:36.886 lat (usec): min=632, max=31393, avg=11622.00, stdev=4730.04 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 1549], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 8225], 00:30:36.886 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:30:36.886 | 70.00th=[12125], 80.00th=[14222], 90.00th=[17171], 95.00th=[19792], 00:30:36.886 | 99.00th=[28967], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:30:36.886 | 99.99th=[30802] 00:30:36.886 bw ( KiB/s): min=20686, max=21136, per=29.93%, avg=20911.00, stdev=318.20, samples=2 00:30:36.886 iops : min= 5171, max= 5284, avg=5227.50, stdev=79.90, samples=2 00:30:36.886 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.19% 00:30:36.886 lat (msec) : 2=0.39%, 4=1.05%, 10=28.99%, 20=63.62%, 50=5.70% 00:30:36.886 lat (msec) : 100=0.01% 00:30:36.886 cpu : usr=3.56%, sys=6.43%, ctx=398, majf=0, minf=1 00:30:36.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:30:36.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.886 issued rwts: total=5120,5361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.886 job3: (groupid=0, jobs=1): err= 0: pid=3549205: Fri Dec 13 09:41:49 2024 00:30:36.886 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:30:36.886 slat (nsec): min=1527, max=18522k, avg=128885.26, 
stdev=1034064.93 00:30:36.886 clat (usec): min=5974, max=49328, avg=17167.06, stdev=7038.02 00:30:36.886 lat (usec): min=5985, max=49456, avg=17295.95, stdev=7112.85 00:30:36.886 clat percentiles (usec): 00:30:36.886 | 1.00th=[ 7177], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10421], 00:30:36.886 | 30.00th=[12125], 40.00th=[13960], 50.00th=[16188], 60.00th=[18482], 00:30:36.886 | 70.00th=[20579], 80.00th=[22152], 90.00th=[28967], 95.00th=[30802], 00:30:36.886 | 99.00th=[33817], 99.50th=[35390], 99.90th=[46924], 99.95th=[47449], 00:30:36.886 | 99.99th=[49546] 00:30:36.886 write: IOPS=3147, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1017msec); 0 zone resets 00:30:36.886 slat (usec): min=2, max=17248, avg=184.32, stdev=1177.98 00:30:36.886 clat (usec): min=1514, max=112721, avg=23762.54, stdev=20559.19 00:30:36.886 lat (usec): min=1526, max=112734, avg=23946.86, stdev=20654.98 00:30:36.886 clat percentiles (msec): 00:30:36.886 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 12], 00:30:36.886 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:30:36.886 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 58], 95.00th=[ 70], 00:30:36.886 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:30:36.886 | 99.99th=[ 113] 00:30:36.886 bw ( KiB/s): min=12288, max=12344, per=17.63%, avg=12316.00, stdev=39.60, samples=2 00:30:36.886 iops : min= 3072, max= 3086, avg=3079.00, stdev= 9.90, samples=2 00:30:36.886 lat (msec) : 2=0.16%, 4=0.38%, 10=10.97%, 20=53.79%, 50=28.90% 00:30:36.886 lat (msec) : 100=5.05%, 250=0.75% 00:30:36.886 cpu : usr=3.25%, sys=3.54%, ctx=282, majf=0, minf=1 00:30:36.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:36.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:36.886 issued rwts: total=3072,3201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:36.886 00:30:36.886 Run status group 0 (all jobs): 00:30:36.886 READ: bw=63.1MiB/s (66.2MB/s), 11.8MiB/s-19.8MiB/s (12.4MB/s-20.7MB/s), io=64.2MiB (67.3MB), run=1003-1017msec 00:30:36.886 WRITE: bw=68.2MiB/s (71.5MB/s), 12.3MiB/s-20.7MiB/s (12.9MB/s-21.7MB/s), io=69.4MiB (72.8MB), run=1003-1017msec 00:30:36.886 00:30:36.886 Disk stats (read/write): 00:30:36.886 nvme0n1: ios=3480/3584, merge=0/0, ticks=26104/17552, in_queue=43656, util=91.08% 00:30:36.886 nvme0n2: ios=3634/3933, merge=0/0, ticks=33939/45470, in_queue=79409, util=95.13% 00:30:36.886 nvme0n3: ios=4590/4608, merge=0/0, ticks=49103/44282, in_queue=93385, util=96.26% 00:30:36.886 nvme0n4: ios=2602/2751, merge=0/0, ticks=39585/64480, in_queue=104065, util=98.85% 00:30:36.886 09:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:36.886 09:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3549381 00:30:36.886 09:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:36.886 09:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:36.886 [global] 00:30:36.886 thread=1 00:30:36.886 invalidate=1 00:30:36.886 rw=read 00:30:36.886 time_based=1 00:30:36.886 runtime=10 00:30:36.886 ioengine=libaio 00:30:36.886 direct=1 00:30:36.886 bs=4096 00:30:36.886 iodepth=1 00:30:36.886 
norandommap=1 00:30:36.886 numjobs=1 00:30:36.886 00:30:36.886 [job0] 00:30:36.886 filename=/dev/nvme0n1 00:30:36.886 [job1] 00:30:36.886 filename=/dev/nvme0n2 00:30:36.886 [job2] 00:30:36.886 filename=/dev/nvme0n3 00:30:36.886 [job3] 00:30:36.886 filename=/dev/nvme0n4 00:30:36.886 Could not set queue depth (nvme0n1) 00:30:36.886 Could not set queue depth (nvme0n2) 00:30:36.886 Could not set queue depth (nvme0n3) 00:30:36.886 Could not set queue depth (nvme0n4) 00:30:37.145 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.145 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.145 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.145 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:37.145 fio-3.35 00:30:37.145 Starting 4 threads 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:40.430 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39825408, buflen=4096 00:30:40.430 fio: pid=3549570, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:40.430 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16896000, buflen=4096 00:30:40.430 fio: pid=3549569, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:40.430 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44564480, buflen=4096 00:30:40.430 fio: pid=3549567, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:40.430 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:40.689 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52662272, buflen=4096 00:30:40.689 fio: pid=3549568, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:40.689 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:40.689 09:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:40.689 00:30:40.689 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3549567: Fri Dec 13 09:41:52 2024 00:30:40.689 read: IOPS=3473, BW=13.6MiB/s 
(14.2MB/s)(42.5MiB/3133msec) 00:30:40.689 slat (usec): min=2, max=17535, avg=12.35, stdev=270.65 00:30:40.689 clat (usec): min=188, max=41235, avg=272.46, stdev=395.46 00:30:40.689 lat (usec): min=195, max=41242, avg=284.81, stdev=481.04 00:30:40.689 clat percentiles (usec): 00:30:40.689 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 247], 00:30:40.689 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:30:40.689 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 326], 00:30:40.689 | 99.00th=[ 429], 99.50th=[ 482], 99.90th=[ 529], 99.95th=[ 660], 00:30:40.689 | 99.99th=[ 2769] 00:30:40.689 bw ( KiB/s): min=12280, max=14808, per=31.10%, avg=14006.00, stdev=931.89, samples=6 00:30:40.689 iops : min= 3070, max= 3702, avg=3501.50, stdev=232.97, samples=6 00:30:40.689 lat (usec) : 250=23.87%, 500=75.90%, 750=0.17% 00:30:40.689 lat (msec) : 2=0.03%, 4=0.01%, 50=0.01% 00:30:40.689 cpu : usr=1.12%, sys=2.75%, ctx=10887, majf=0, minf=2 00:30:40.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.689 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.689 issued rwts: total=10881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.689 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3549568: Fri Dec 13 09:41:52 2024 00:30:40.689 read: IOPS=3852, BW=15.0MiB/s (15.8MB/s)(50.2MiB/3338msec) 00:30:40.689 slat (usec): min=5, max=21218, avg=12.17, stdev=283.20 00:30:40.689 clat (usec): min=190, max=4203, avg=244.49, stdev=54.87 00:30:40.689 lat (usec): min=197, max=21558, avg=256.66, stdev=290.24 00:30:40.689 clat percentiles (usec): 00:30:40.689 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 221], 20.00th=[ 229], 00:30:40.689 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:30:40.689 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 293], 00:30:40.689 | 99.00th=[ 396], 99.50th=[ 490], 99.90th=[ 510], 99.95th=[ 515], 00:30:40.689 | 99.99th=[ 3163] 00:30:40.689 bw ( KiB/s): min=13416, max=16272, per=34.08%, avg=15349.17, stdev=1198.51, samples=6 00:30:40.689 iops : min= 3354, max= 4068, avg=3837.17, stdev=299.76, samples=6 00:30:40.689 lat (usec) : 250=75.35%, 500=24.33%, 750=0.30% 00:30:40.689 lat (msec) : 4=0.01%, 10=0.01% 00:30:40.689 cpu : usr=0.99%, sys=3.57%, ctx=12863, majf=0, minf=2 00:30:40.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.689 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.689 issued rwts: total=12858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.689 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3549569: Fri Dec 13 09:41:52 2024 00:30:40.689 read: IOPS=1398, BW=5591KiB/s (5726kB/s)(16.1MiB/2951msec) 00:30:40.689 slat (nsec): min=5173, max=70793, avg=7677.26, stdev=1912.41 00:30:40.689 clat (usec): min=210, max=42000, avg=701.27, stdev=4190.79 00:30:40.689 lat (usec): min=217, max=42011, avg=708.95, stdev=4191.42 00:30:40.689 clat percentiles (usec): 00:30:40.689 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:30:40.690 | 30.00th=[ 249], 40.00th=[ 253], 
50.00th=[ 258], 60.00th=[ 262], 00:30:40.690 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 330], 00:30:40.690 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:40.690 | 99.99th=[42206] 00:30:40.690 bw ( KiB/s): min= 96, max=14664, per=14.61%, avg=6582.40, stdev=7395.85, samples=5 00:30:40.690 iops : min= 24, max= 3666, avg=1645.60, stdev=1848.96, samples=5 00:30:40.690 lat (usec) : 250=33.66%, 500=65.17%, 750=0.02% 00:30:40.690 lat (msec) : 4=0.02%, 20=0.02%, 50=1.07% 00:30:40.690 cpu : usr=0.34%, sys=1.36%, ctx=4127, majf=0, minf=1 00:30:40.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.690 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.690 issued rwts: total=4126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.690 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3549570: Fri Dec 13 09:41:52 2024 00:30:40.690 read: IOPS=3600, BW=14.1MiB/s (14.7MB/s)(38.0MiB/2701msec) 00:30:40.690 slat (nsec): min=7498, max=68277, avg=9078.10, stdev=2599.75 00:30:40.690 clat (usec): min=197, max=1886, avg=266.19, stdev=45.53 00:30:40.690 lat (usec): min=205, max=1896, avg=275.27, stdev=45.91 00:30:40.690 clat percentiles (usec): 00:30:40.690 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:30:40.690 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:30:40.690 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 343], 00:30:40.690 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 578], 00:30:40.690 | 99.99th=[ 1893] 00:30:40.690 bw ( KiB/s): min=13080, max=15600, per=32.27%, avg=14532.80, stdev=1122.01, samples=5 00:30:40.690 iops : min= 3270, max= 3900, avg=3633.20, stdev=280.50, samples=5 00:30:40.690 lat (usec) : 250=41.15%, 500=58.46%, 750=0.36% 00:30:40.690 lat (msec) : 2=0.02% 00:30:40.690 cpu : usr=1.33%, sys=3.67%, ctx=9725, majf=0, minf=2 00:30:40.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.690 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.690 issued rwts: total=9724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:40.690 00:30:40.690 Run status group 0 (all jobs): 00:30:40.690 READ: bw=44.0MiB/s (46.1MB/s), 5591KiB/s-15.0MiB/s (5726kB/s-15.8MB/s), io=147MiB (154MB), run=2701-3338msec 00:30:40.690 00:30:40.690 Disk stats (read/write): 00:30:40.690 nvme0n1: ios=10855/0, merge=0/0, ticks=2913/0, in_queue=2913, util=94.05% 00:30:40.690 nvme0n2: ios=11918/0, merge=0/0, ticks=3182/0, in_queue=3182, util=97.34% 00:30:40.690 nvme0n3: ios=4123/0, merge=0/0, ticks=2787/0, in_queue=2787, util=96.52% 00:30:40.690 nvme0n4: ios=9527/0, merge=0/0, ticks=2583/0, in_queue=2583, util=99.15% 00:30:40.948 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:40.948 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:41.207 09:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:41.207 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:41.207 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:41.207 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:41.466 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:41.466 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:41.725 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:41.725 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3549381 00:30:41.725 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:41.725 09:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:41.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:41.725 nvmf hotplug test: fio failed as expected 00:30:41.725 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:41.983 09:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.983 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.983 rmmod nvme_tcp 00:30:41.983 rmmod nvme_fabrics 00:30:41.983 rmmod nvme_keyring 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3546801 ']' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3546801 ']' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3546801' 00:30:42.242 killing process with pid 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3546801 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.242 09:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.777 00:30:44.777 real 0m25.274s 00:30:44.777 user 1m31.280s 00:30:44.777 sys 0m10.941s 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.777 ************************************ 00:30:44.777 END TEST nvmf_fio_target 00:30:44.777 ************************************ 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.777 ************************************ 00:30:44.777 START TEST nvmf_bdevio 00:30:44.777 ************************************ 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:44.777 * Looking for test storage... 
00:30:44.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.777 --rc genhtml_branch_coverage=1 00:30:44.777 --rc genhtml_function_coverage=1 00:30:44.777 --rc genhtml_legend=1 00:30:44.777 --rc geninfo_all_blocks=1 00:30:44.777 --rc geninfo_unexecuted_blocks=1 00:30:44.777 00:30:44.777 ' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.777 --rc genhtml_branch_coverage=1 00:30:44.777 --rc genhtml_function_coverage=1 00:30:44.777 --rc genhtml_legend=1 00:30:44.777 --rc geninfo_all_blocks=1 00:30:44.777 --rc geninfo_unexecuted_blocks=1 00:30:44.777 00:30:44.777 ' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.777 --rc genhtml_branch_coverage=1 00:30:44.777 --rc genhtml_function_coverage=1 00:30:44.777 --rc genhtml_legend=1 00:30:44.777 --rc geninfo_all_blocks=1 00:30:44.777 --rc geninfo_unexecuted_blocks=1 00:30:44.777 00:30:44.777 ' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.777 --rc genhtml_branch_coverage=1 00:30:44.777 --rc genhtml_function_coverage=1 00:30:44.777 --rc genhtml_legend=1 00:30:44.777 --rc geninfo_all_blocks=1 00:30:44.777 --rc geninfo_unexecuted_blocks=1 00:30:44.777 00:30:44.777 ' 00:30:44.777 09:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.777 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.778 09:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.778 09:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.043 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.043 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.044 09:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.044 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.044 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:30:50.044 00:30:50.044 --- 10.0.0.2 ping statistics --- 00:30:50.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.044 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:50.044 00:30:50.044 --- 10.0.0.1 ping statistics --- 00:30:50.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.044 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.044 09:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3553843 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3553843 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3553843 ']' 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.044 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.044 [2024-12-13 09:42:02.408697] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.303 [2024-12-13 09:42:02.409617] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:30:50.303 [2024-12-13 09:42:02.409649] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.303 [2024-12-13 09:42:02.478147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.303 [2024-12-13 09:42:02.519493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.303 [2024-12-13 09:42:02.519529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.303 [2024-12-13 09:42:02.519535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.303 [2024-12-13 09:42:02.519542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.303 [2024-12-13 09:42:02.519547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.303 [2024-12-13 09:42:02.520900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:50.303 [2024-12-13 09:42:02.521006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:50.303 [2024-12-13 09:42:02.521138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.303 [2024-12-13 09:42:02.521139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:50.303 [2024-12-13 09:42:02.589431] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:50.303 [2024-12-13 09:42:02.590255] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.303 [2024-12-13 09:42:02.590543] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:50.303 [2024-12-13 09:42:02.590966] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:50.303 [2024-12-13 09:42:02.591002] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.303 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.303 [2024-12-13 09:42:02.649775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.563 Malloc0 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.563 09:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:50.563 [2024-12-13 09:42:02.733817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:50.563 { 00:30:50.563 "params": { 00:30:50.563 "name": "Nvme$subsystem", 00:30:50.563 "trtype": "$TEST_TRANSPORT", 00:30:50.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.563 "adrfam": "ipv4", 00:30:50.563 "trsvcid": "$NVMF_PORT", 00:30:50.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.563 "hdgst": ${hdgst:-false}, 00:30:50.563 "ddgst": ${ddgst:-false} 00:30:50.563 }, 00:30:50.563 "method": "bdev_nvme_attach_controller" 00:30:50.563 } 00:30:50.563 EOF 00:30:50.563 )") 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:50.563 09:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:50.563 "params": { 00:30:50.563 "name": "Nvme1", 00:30:50.563 "trtype": "tcp", 00:30:50.563 "traddr": "10.0.0.2", 00:30:50.563 "adrfam": "ipv4", 00:30:50.563 "trsvcid": "4420", 00:30:50.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.563 "hdgst": false, 00:30:50.563 "ddgst": false 00:30:50.563 }, 00:30:50.563 "method": "bdev_nvme_attach_controller" 00:30:50.563 }' 00:30:50.563 [2024-12-13 09:42:02.784408] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
00:30:50.563 [2024-12-13 09:42:02.784460] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553874 ] 00:30:50.563 [2024-12-13 09:42:02.848706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.563 [2024-12-13 09:42:02.891647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.563 [2024-12-13 09:42:02.891742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.563 [2024-12-13 09:42:02.891744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.821 I/O targets: 00:30:50.821 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:50.821 00:30:50.821 00:30:50.821 CUnit - A unit testing framework for C - Version 2.1-3 00:30:50.821 http://cunit.sourceforge.net/ 00:30:50.821 00:30:50.821 00:30:50.821 Suite: bdevio tests on: Nvme1n1 00:30:50.821 Test: blockdev write read block ...passed 00:30:50.821 Test: blockdev write zeroes read block ...passed 00:30:50.821 Test: blockdev write zeroes read no split ...passed 00:30:50.821 Test: blockdev write zeroes read split ...passed 00:30:50.821 Test: blockdev write zeroes read split partial ...passed 00:30:51.079 Test: blockdev reset ...[2024-12-13 09:42:03.188541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:51.079 [2024-12-13 09:42:03.188603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa68610 (9): Bad file descriptor 00:30:51.079 [2024-12-13 09:42:03.192379] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:30:51.079 passed 00:30:51.079 Test: blockdev write read 8 blocks ...passed 00:30:51.079 Test: blockdev write read size > 128k ...passed 00:30:51.079 Test: blockdev write read invalid size ...passed 00:30:51.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:51.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:51.079 Test: blockdev write read max offset ...passed 00:30:51.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:51.079 Test: blockdev writev readv 8 blocks ...passed 00:30:51.079 Test: blockdev writev readv 30 x 1block ...passed 00:30:51.079 Test: blockdev writev readv block ...passed 00:30:51.079 Test: blockdev writev readv size > 128k ...passed 00:30:51.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:51.079 Test: blockdev comparev and writev ...[2024-12-13 09:42:03.402281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.402322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.402626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.402647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.402943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.402966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.402977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.403263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.403272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.079 [2024-12-13 09:42:03.403283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:51.079 [2024-12-13 09:42:03.403290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.079 passed 00:30:51.337 Test: blockdev nvme passthru rw ...passed 00:30:51.337 Test: blockdev nvme passthru vendor specific ...[2024-12-13 09:42:03.484841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.337 [2024-12-13 09:42:03.484858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.337 [2024-12-13 09:42:03.484970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.337 [2024-12-13 09:42:03.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.337 [2024-12-13 09:42:03.485092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.337 [2024-12-13 09:42:03.485101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.338 [2024-12-13 09:42:03.485217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.338 [2024-12-13 09:42:03.485227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.338 passed 00:30:51.338 Test: blockdev nvme admin passthru ...passed 00:30:51.338 Test: blockdev copy ...passed 00:30:51.338 00:30:51.338 Run Summary: Type Total Ran Passed Failed Inactive 00:30:51.338 suites 1 1 n/a 0 0 00:30:51.338 tests 23 23 23 0 0 00:30:51.338 asserts 152 152 152 0 n/a 00:30:51.338 00:30:51.338 Elapsed time = 0.995 seconds 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.338 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.338 rmmod nvme_tcp 00:30:51.596 rmmod nvme_fabrics 00:30:51.596 rmmod nvme_keyring 00:30:51.596 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:30:51.596 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:51.596 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3553843 ']' 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3553843 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3553843 ']' 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3553843 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3553843 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3553843' 00:30:51.597 killing process with pid 3553843 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3553843 00:30:51.597 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3553843 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.855 09:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:53.759 00:30:53.759 real 0m9.297s 00:30:53.759 user 
0m7.794s 00:30:53.759 sys 0m4.831s 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:53.759 ************************************ 00:30:53.759 END TEST nvmf_bdevio 00:30:53.759 ************************************ 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:53.759 00:30:53.759 real 4m24.451s 00:30:53.759 user 9m4.729s 00:30:53.759 sys 1m44.869s 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.759 09:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:53.759 ************************************ 00:30:53.759 END TEST nvmf_target_core_interrupt_mode 00:30:53.759 ************************************ 00:30:53.759 09:42:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:53.759 09:42:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:53.759 09:42:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.759 09:42:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.018 ************************************ 00:30:54.018 START TEST nvmf_interrupt 00:30:54.018 ************************************ 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:54.018 * Looking for test storage... 
00:30:54.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.018 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.019 --rc genhtml_branch_coverage=1 00:30:54.019 --rc genhtml_function_coverage=1 00:30:54.019 --rc genhtml_legend=1 00:30:54.019 --rc geninfo_all_blocks=1 00:30:54.019 --rc geninfo_unexecuted_blocks=1 00:30:54.019 00:30:54.019 ' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.019 --rc genhtml_branch_coverage=1 00:30:54.019 --rc genhtml_function_coverage=1 00:30:54.019 --rc genhtml_legend=1 00:30:54.019 --rc geninfo_all_blocks=1 00:30:54.019 --rc geninfo_unexecuted_blocks=1 00:30:54.019 00:30:54.019 ' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.019 --rc genhtml_branch_coverage=1 00:30:54.019 --rc genhtml_function_coverage=1 00:30:54.019 --rc genhtml_legend=1 00:30:54.019 --rc geninfo_all_blocks=1 00:30:54.019 --rc geninfo_unexecuted_blocks=1 00:30:54.019 00:30:54.019 ' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.019 --rc genhtml_branch_coverage=1 00:30:54.019 --rc genhtml_function_coverage=1 00:30:54.019 --rc genhtml_legend=1 00:30:54.019 --rc geninfo_all_blocks=1 00:30:54.019 --rc geninfo_unexecuted_blocks=1 00:30:54.019 00:30:54.019 ' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.019 09:42:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:59.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.291 09:42:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:59.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:59.291 Found net devices under 0000:af:00.0: cvl_0_0 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.291 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:59.292 Found net devices under 0000:af:00.1: cvl_0_1 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.292 09:42:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.292 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.550 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:30:59.551 00:30:59.551 --- 10.0.0.2 ping statistics --- 00:30:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.551 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:30:59.551 00:30:59.551 --- 10.0.0.1 ping statistics --- 00:30:59.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.551 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3557963 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3557963 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3557963 ']' 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.551 09:42:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:59.810 [2024-12-13 09:42:11.964162] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.810 [2024-12-13 09:42:11.965087] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:30:59.810 [2024-12-13 09:42:11.965125] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.810 [2024-12-13 09:42:12.030526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.810 [2024-12-13 09:42:12.071180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:59.810 [2024-12-13 09:42:12.071214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.810 [2024-12-13 09:42:12.071221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.810 [2024-12-13 09:42:12.071226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.810 [2024-12-13 09:42:12.071231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.810 [2024-12-13 09:42:12.072304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.810 [2024-12-13 09:42:12.072308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.810 [2024-12-13 09:42:12.139754] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.810 [2024-12-13 09:42:12.139986] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:59.810 [2024-12-13 09:42:12.140041] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:59.810 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.810 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:59.810 09:42:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.810 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.810 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:00.069 5000+0 records in 00:31:00.069 5000+0 records out 00:31:00.069 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0175688 s, 583 MB/s 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 AIO0 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 [2024-12-13 09:42:12.280840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.069 09:42:12 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:00.069 [2024-12-13 09:42:12.305078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3557963 0 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 0 idle 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:00.069 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557963 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.23 reactor_0' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557963 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.23 reactor_0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3557963 1 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 1 idle 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557968 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.00 reactor_1' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557968 root 20 0 128.2g 46848 34560 S 0.0 0.1 0:00.00 reactor_1 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3558113 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
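
For orientation, the interrupt-mode target exercised in the trace above is assembled from a short RPC sequence before the perf load starts. A condensed sketch of that sequence, reconstructed from the traced commands (paths, the netns name cvl_0_0_ns_spdk, and rpc_cmd, the harness wrapper around the target's /var/tmp/spdk.sock RPC socket, are all specific to this run):

  # start the target in interrupt mode on two cores inside the test namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

  # back the namespace with an AIO bdev over a plain ~10 MB file
  dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
  rpc_cmd bdev_aio_create test/nvmf/target/aiofile AIO0 2048

  # TCP transport, subsystem, namespace and listener
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # drive mixed 4K random I/O at the listener for 10 seconds from cores 2-3
  ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
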
00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3557963 0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3557963 0 busy 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:00.329 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:00.588 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557963 root 20 0 128.2g 47616 34560 R 99.9 0.1 0:00.38 reactor_0' 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557963 root 20 0 128.2g 47616 34560 R 99.9 0.1 0:00.38 reactor_0 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3557963 1 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3557963 1 busy 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:00.589 09:42:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557968 root 20 0 128.2g 47616 34560 R 93.8 0.1 0:00.25 reactor_1' 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557968 root 20 0 128.2g 47616 34560 R 93.8 0.1 0:00.25 reactor_1 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.847 09:42:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3558113 00:31:10.825 Initializing NVMe Controllers 00:31:10.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.825 Controller IO queue size 256, less than required. 00:31:10.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:10.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:10.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:10.825 Initialization complete. Launching workers. 
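
The reactor_is_busy / reactor_is_idle checks traced above reduce to sampling the target's threads once with top and comparing the %CPU column against a threshold: the busy check during the perf load requires at least 30% CPU on the reactor thread, the idle checks require at most 30%. A minimal standalone version of that check, written from the interrupt/common.sh trace; the function name here is illustrative, not the harness's own:

  # Succeeds when the reactor thread for the given index exceeds the CPU threshold.
  reactor_above_threshold() {
      local pid=$1 idx=$2 threshold=$3
      local cpu_rate
      # one batch sample of the target's threads; in top's thread view column 9 is %CPU
      cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
      cpu_rate=${cpu_rate%.*}          # 99.9 -> 99, 0.0 -> 0, for integer comparison
      (( cpu_rate > threshold ))
  }

  # busy check under load:   reactor_above_threshold 3557963 0 30 && echo "reactor_0 busy"
  # idle check after the run: reactor_above_threshold 3557963 1 30 || echo "reactor_1 idle"
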
00:31:10.825 ======================================================== 00:31:10.825 Latency(us) 00:31:10.825 Device Information : IOPS MiB/s Average min max 00:31:10.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16659.19 65.07 15375.74 2693.63 19374.07 00:31:10.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16427.69 64.17 15591.38 4347.19 19115.09 00:31:10.826 ======================================================== 00:31:10.826 Total : 33086.88 129.25 15482.81 2693.63 19374.07 00:31:10.826 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3557963 0 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 0 idle 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:10.826 09:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557963 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:20.21 reactor_0' 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557963 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:20.21 reactor_0 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3557963 1 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 1 idle 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:10.826 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557968 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:10.00 reactor_1' 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557968 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:10.00 reactor_1 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:11.085 09:42:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:11.344 09:42:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:11.344 09:42:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:11.344 09:42:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:11.344 09:42:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:11.344 09:42:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3557963 0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 0 idle 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557963 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.43 reactor_0' 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557963 root 20 0 128.2g 73728 34560 S 6.7 0.1 0:20.43 reactor_0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3557963 1 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3557963 1 idle 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3557963 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
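
On the host side the test then attaches to the subsystem with the kernel initiator, waits for the namespace's block device to show up by serial number, and disconnects once the idle checks pass. Condensed from the connect and disconnect lines traced above and below; the retry cap of the harness's waitforserial loop is omitted here for brevity:

  # connect over TCP using this host's generated NQN/ID (values from this run)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

  # wait until lsblk reports a device whose serial matches the subsystem's serial
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done

  # tear down the host-side connection when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
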
00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3557963 -w 256 00:31:13.878 09:42:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:13.878 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3557968 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.07 reactor_1' 00:31:13.878 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:13.878 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3557968 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.07 reactor_1 00:31:13.878 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:13.878 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:13.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.879 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.879 rmmod nvme_tcp 00:31:13.879 rmmod nvme_fabrics 00:31:13.879 rmmod nvme_keyring 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3557963 ']' 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3557963 ']' 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3557963' 00:31:14.138 killing process with pid 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3557963 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:14.138 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.397 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.397 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.397 09:42:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.397 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.397 09:42:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.301 09:42:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.301 00:31:16.301 real 0m22.418s 00:31:16.301 user 0m39.564s 00:31:16.301 sys 0m8.021s 00:31:16.301 09:42:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.301 09:42:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:16.301 ************************************ 00:31:16.301 END TEST nvmf_interrupt 00:31:16.301 ************************************ 00:31:16.301 00:31:16.301 real 26m40.236s 00:31:16.301 user 55m46.747s 00:31:16.301 sys 8m50.504s 00:31:16.301 09:42:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.301 09:42:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.301 ************************************ 00:31:16.301 END TEST nvmf_tcp 00:31:16.301 ************************************ 00:31:16.301 09:42:28 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:16.301 09:42:28 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:16.301 09:42:28 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:16.301 09:42:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.301 09:42:28 -- common/autotest_common.sh@10 -- # set +x 00:31:16.301 ************************************ 00:31:16.301 START TEST spdkcli_nvmf_tcp 00:31:16.301 ************************************ 00:31:16.301 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:16.561 * Looking for test storage... 00:31:16.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.561 --rc genhtml_branch_coverage=1 00:31:16.561 --rc genhtml_function_coverage=1 00:31:16.561 --rc genhtml_legend=1 00:31:16.561 --rc geninfo_all_blocks=1 00:31:16.561 --rc geninfo_unexecuted_blocks=1 00:31:16.561 00:31:16.561 ' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.561 --rc genhtml_branch_coverage=1 00:31:16.561 --rc genhtml_function_coverage=1 00:31:16.561 --rc genhtml_legend=1 00:31:16.561 --rc geninfo_all_blocks=1 00:31:16.561 --rc geninfo_unexecuted_blocks=1 00:31:16.561 00:31:16.561 ' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.561 --rc genhtml_branch_coverage=1 00:31:16.561 --rc genhtml_function_coverage=1 00:31:16.561 --rc genhtml_legend=1 00:31:16.561 --rc geninfo_all_blocks=1 00:31:16.561 --rc geninfo_unexecuted_blocks=1 00:31:16.561 00:31:16.561 ' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:16.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.561 --rc genhtml_branch_coverage=1 00:31:16.561 --rc genhtml_function_coverage=1 00:31:16.561 --rc genhtml_legend=1 00:31:16.561 --rc geninfo_all_blocks=1 00:31:16.561 --rc geninfo_unexecuted_blocks=1 00:31:16.561 00:31:16.561 ' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:16.561 
09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:16.561 09:42:28 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.561 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:16.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3560848 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3560848 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3560848 ']' 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.562 09:42:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.562 [2024-12-13 09:42:28.899486] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
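
The spdkcli_nvmf_tcp test starting here drives this second, poll-mode target entirely through spdkcli: spdkcli_job.py is fed lines of the form 'command' 'expected substring' True/False, and the resulting /nvmf tree is then compared against a stored .match file. A compressed sketch of that flow, using commands visible in the job output below; the redirection of the 'll /nvmf' listing into the .test file is performed by the harness's check_match helper and is inferred rather than shown verbatim in the trace:

  # plain target for the CLI test; waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  waitforlisten $!

  # one entry shown here; the real job passes many such lines in a single quoted argument,
  # creating the transport, subsystems, namespaces and listeners listed below
  ./test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True"

  # compare the live configuration tree against the recorded expectation
  ./scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  ./test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
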
00:31:16.562 [2024-12-13 09:42:28.899534] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560848 ] 00:31:16.821 [2024-12-13 09:42:28.961386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:16.821 [2024-12-13 09:42:29.004086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.821 [2024-12-13 09:42:29.004090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.821 09:42:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:16.821 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:16.821 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:16.821 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:16.821 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:16.821 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:16.821 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:16.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:16.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:16.821 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:16.821 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:16.821 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:16.821 ' 00:31:19.351 [2024-12-13 09:42:31.642647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.826 [2024-12-13 09:42:32.906790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:23.369 [2024-12-13 09:42:35.234067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:25.274 [2024-12-13 09:42:37.240300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:26.663 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:26.663 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:26.663 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.663 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:26.663 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.663 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:26.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:26.663 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:26.663 09:42:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:26.929 09:42:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.187 
09:42:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:27.187 09:42:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:27.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:27.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:27.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:27.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:27.187 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:27.187 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:27.187 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:27.187 ' 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:32.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:32.452 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:32.452 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:32.452 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.452 
09:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3560848 ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3560848' 00:31:32.452 killing process with pid 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3560848 ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3560848 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3560848 ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3560848 00:31:32.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3560848) - No such process 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3560848 is not found' 00:31:32.452 Process with pid 3560848 is not found 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:32.452 00:31:32.452 real 0m16.081s 00:31:32.452 user 0m33.943s 00:31:32.452 sys 0m0.684s 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.452 09:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.452 ************************************ 00:31:32.452 END TEST spdkcli_nvmf_tcp 00:31:32.452 ************************************ 00:31:32.452 09:42:44 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:32.452 09:42:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:32.452 09:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.452 09:42:44 -- common/autotest_common.sh@10 -- # set +x 00:31:32.452 ************************************ 00:31:32.452 START TEST nvmf_identify_passthru 00:31:32.452 ************************************ 00:31:32.452 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:32.711 * Looking for test 
storage... 00:31:32.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.711 --rc genhtml_branch_coverage=1 00:31:32.711 --rc genhtml_function_coverage=1 00:31:32.711 --rc genhtml_legend=1 00:31:32.711 --rc geninfo_all_blocks=1 00:31:32.711 --rc geninfo_unexecuted_blocks=1 00:31:32.711 00:31:32.711 ' 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.711 --rc genhtml_branch_coverage=1 00:31:32.711 --rc genhtml_function_coverage=1 00:31:32.711 --rc genhtml_legend=1 00:31:32.711 --rc geninfo_all_blocks=1 00:31:32.711 --rc geninfo_unexecuted_blocks=1 00:31:32.711 00:31:32.711 ' 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.711 --rc genhtml_branch_coverage=1 00:31:32.711 --rc genhtml_function_coverage=1 00:31:32.711 --rc genhtml_legend=1 00:31:32.711 --rc geninfo_all_blocks=1 00:31:32.711 --rc geninfo_unexecuted_blocks=1 00:31:32.711 00:31:32.711 ' 00:31:32.711 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.711 --rc genhtml_branch_coverage=1 00:31:32.711 --rc genhtml_function_coverage=1 00:31:32.711 --rc genhtml_legend=1 00:31:32.711 --rc geninfo_all_blocks=1 00:31:32.711 --rc geninfo_unexecuted_blocks=1 00:31:32.711 00:31:32.711 ' 00:31:32.711 09:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.711 09:42:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.711 09:42:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.711 09:42:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.711 09:42:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.711 09:42:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:32.711 09:42:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.711 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:32.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.712 09:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.712 09:42:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.712 09:42:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.712 09:42:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.712 09:42:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.712 09:42:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.712 09:42:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.712 09:42:44 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.712 09:42:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:32.712 09:42:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.712 09:42:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.712 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:32.712 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.712 09:42:44 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.712 09:42:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:37.977 09:42:50 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:37.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:37.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:37.977 Found net devices under 0000:af:00.0: cvl_0_0 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:37.977 Found net devices under 0000:af:00.1: cvl_0_1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:37.977 09:42:50 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:37.977 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:31:38.236 00:31:38.236 --- 10.0.0.2 ping statistics --- 00:31:38.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.236 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:38.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:31:38.236 00:31:38.236 --- 10.0.0.1 ping statistics --- 00:31:38.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.236 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.236 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.237 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.237 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.237 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.237 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.237 09:42:50 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.237 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:38.237 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:38.237 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:38.495 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:38.495 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:38.495 09:42:50 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:31:38.495 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:31:38.495 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:31:38.495 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:38.495 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:38.495 09:42:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:42.681 09:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:31:42.681 09:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:42.681 09:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:42.681 09:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3567731 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3567731 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3567731 ']' 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.876 09:42:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.876 09:42:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 [2024-12-13 09:42:58.945281] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:31:46.876 [2024-12-13 09:42:58.945329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.876 [2024-12-13 09:42:59.010404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.876 [2024-12-13 09:42:59.052528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.876 [2024-12-13 09:42:59.052565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:46.876 [2024-12-13 09:42:59.052571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.876 [2024-12-13 09:42:59.052577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.876 [2024-12-13 09:42:59.052582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.876 [2024-12-13 09:42:59.053918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.876 [2024-12-13 09:42:59.054020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.876 [2024-12-13 09:42:59.054105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.876 [2024-12-13 09:42:59.054106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:31:46.876 09:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 INFO: Log level set to 20 00:31:46.876 INFO: Requests: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "method": "nvmf_set_config", 00:31:46.876 "id": 1, 00:31:46.876 "params": { 00:31:46.876 "admin_cmd_passthru": { 00:31:46.876 "identify_ctrlr": true 00:31:46.876 } 00:31:46.876 } 00:31:46.876 } 00:31:46.876 00:31:46.876 INFO: response: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "id": 1, 00:31:46.876 "result": true 00:31:46.876 } 00:31:46.876 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.876 09:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 INFO: Setting log level to 20 00:31:46.876 INFO: Setting log level to 20 00:31:46.876 INFO: Log level set to 20 00:31:46.876 INFO: Log level set to 20 00:31:46.876 INFO: Requests: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "method": "framework_start_init", 00:31:46.876 "id": 1 00:31:46.876 } 00:31:46.876 00:31:46.876 INFO: Requests: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "method": "framework_start_init", 00:31:46.876 "id": 1 00:31:46.876 } 00:31:46.876 00:31:46.876 [2024-12-13 09:42:59.166525] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:46.876 INFO: response: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "id": 1, 00:31:46.876 "result": true 00:31:46.876 } 00:31:46.876 00:31:46.876 INFO: response: 00:31:46.876 { 00:31:46.876 "jsonrpc": "2.0", 00:31:46.876 "id": 1, 00:31:46.876 "result": true 00:31:46.876 } 00:31:46.876 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.876 09:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.876 09:42:59 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.876 INFO: Setting log level to 40 00:31:46.876 INFO: Setting log level to 40 00:31:46.876 INFO: Setting log level to 40 00:31:46.876 [2024-12-13 09:42:59.179833] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.876 09:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:46.876 09:42:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.876 09:42:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 Nvme0n1 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 [2024-12-13 09:43:02.098411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 [ 00:31:50.162 { 00:31:50.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:50.162 "subtype": "Discovery", 00:31:50.162 "listen_addresses": [], 00:31:50.162 "allow_any_host": true, 00:31:50.162 "hosts": [] 00:31:50.162 }, 00:31:50.162 { 00:31:50.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:50.162 "subtype": "NVMe", 00:31:50.162 "listen_addresses": [ 00:31:50.162 { 00:31:50.162 "trtype": "TCP", 00:31:50.162 "adrfam": "IPv4", 00:31:50.162 "traddr": "10.0.0.2", 00:31:50.162 "trsvcid": "4420" 00:31:50.162 } 00:31:50.162 ], 00:31:50.162 "allow_any_host": true, 00:31:50.162 "hosts": [], 00:31:50.162 "serial_number": 
"SPDK00000000000001", 00:31:50.162 "model_number": "SPDK bdev Controller", 00:31:50.162 "max_namespaces": 1, 00:31:50.162 "min_cntlid": 1, 00:31:50.162 "max_cntlid": 65519, 00:31:50.162 "namespaces": [ 00:31:50.162 { 00:31:50.162 "nsid": 1, 00:31:50.162 "bdev_name": "Nvme0n1", 00:31:50.162 "name": "Nvme0n1", 00:31:50.162 "nguid": "E8F4BD971F274E1E8BB64B7F8488B103", 00:31:50.162 "uuid": "e8f4bd97-1f27-4e1e-8bb6-4b7f8488b103" 00:31:50.162 } 00:31:50.162 ] 00:31:50.162 } 00:31:50.162 ] 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:50.162 09:43:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.162 rmmod nvme_tcp 00:31:50.162 rmmod nvme_fabrics 00:31:50.162 rmmod nvme_keyring 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3567731 ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3567731 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3567731 ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3567731 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.162 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3567731 00:31:50.421 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.421 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.421 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3567731' 00:31:50.421 killing process with pid 3567731 00:31:50.421 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3567731 00:31:50.421 09:43:02 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3567731 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.798 09:43:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.798 09:43:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.798 09:43:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.333 09:43:06 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.333 00:31:54.333 real 0m21.342s 00:31:54.333 user 0m26.522s 00:31:54.333 sys 0m5.814s 00:31:54.333 09:43:06 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.333 09:43:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:54.333 ************************************ 00:31:54.333 END TEST nvmf_identify_passthru 00:31:54.333 ************************************ 00:31:54.333 09:43:06 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:54.333 09:43:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.333 09:43:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.333 09:43:06 -- common/autotest_common.sh@10 -- # set +x 00:31:54.333 ************************************ 00:31:54.333 START TEST nvmf_dif 00:31:54.333 ************************************ 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:54.333 * Looking for test 
storage... 00:31:54.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.333 09:43:06 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.333 --rc genhtml_branch_coverage=1 00:31:54.333 --rc genhtml_function_coverage=1 00:31:54.333 --rc genhtml_legend=1 00:31:54.333 --rc geninfo_all_blocks=1 00:31:54.333 --rc geninfo_unexecuted_blocks=1 00:31:54.333 00:31:54.333 ' 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.333 --rc genhtml_branch_coverage=1 00:31:54.333 --rc genhtml_function_coverage=1 00:31:54.333 --rc genhtml_legend=1 00:31:54.333 --rc geninfo_all_blocks=1 00:31:54.333 --rc geninfo_unexecuted_blocks=1 00:31:54.333 00:31:54.333 ' 00:31:54.333 09:43:06 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.333 --rc genhtml_branch_coverage=1 00:31:54.333 --rc genhtml_function_coverage=1 00:31:54.333 --rc genhtml_legend=1 00:31:54.333 --rc geninfo_all_blocks=1 00:31:54.333 --rc geninfo_unexecuted_blocks=1 00:31:54.333 00:31:54.333 ' 00:31:54.333 09:43:06 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.333 --rc genhtml_branch_coverage=1 00:31:54.333 --rc genhtml_function_coverage=1 00:31:54.333 --rc genhtml_legend=1 00:31:54.333 --rc geninfo_all_blocks=1 00:31:54.333 --rc geninfo_unexecuted_blocks=1 00:31:54.333 00:31:54.333 ' 00:31:54.333 09:43:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.333 09:43:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.334 09:43:06 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.334 09:43:06 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.334 09:43:06 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.334 09:43:06 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.334 09:43:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.334 09:43:06 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.334 09:43:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.334 09:43:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:54.334 09:43:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.334 09:43:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:54.334 09:43:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:54.334 09:43:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:54.334 09:43:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:54.334 09:43:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.334 09:43:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.334 09:43:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.334 09:43:06 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:54.334 09:43:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.604 09:43:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:59.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.605 
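Note on the scan above: gather_supported_nvmf_pci_devs keys on fixed vendor:device IDs that common.sh reads from its pci_bus_cache map (0x8086:0x1592 and 0x8086:0x159b for E810, 0x8086:0x37d2 for X722, plus the Mellanox IDs listed in the trace) and then looks under each matching PCI device's sysfs node for its net interface. A minimal manual equivalent for the E810 IDs used in this run, assuming lspci and the usual /sys/bus/pci/devices/<BDF>/net/ layout (common.sh itself does not call lspci), is roughly:

  # hand-rolled version of the E810 portion of the scan, IDs taken from the e810 array above
  for id in 8086:1592 8086:159b; do
      for bdf in $(lspci -Dn -d "$id" | awk '{print $1}'); do
          echo "E810 port $bdf -> $(ls /sys/bus/pci/devices/$bdf/net/ 2>/dev/null)"
      done
  done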
09:43:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:59.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:59.605 Found net devices under 0000:af:00.0: cvl_0_0 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:59.605 Found net devices under 0000:af:00.1: cvl_0_1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:31:59.605 00:31:59.605 --- 10.0.0.2 ping statistics --- 00:31:59.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.605 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:31:59.605 09:43:11 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
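The nvmf_tcp_init sequence traced above is easier to read collapsed into plain commands. This is only a restatement of what this run does (ports cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, addresses 10.0.0.1/10.0.0.2, port 4420); the comment tag that the ipts wrapper appends to the iptables rule is omitted:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # default ns -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address

The two pings that follow in the trace confirm both directions of this back-to-back topology before the target is started inside the namespace.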
00:31:59.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:31:59.864 00:31:59.864 --- 10.0.0.1 ping statistics --- 00:31:59.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.864 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:31:59.864 09:43:11 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.864 09:43:11 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:59.864 09:43:11 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:59.864 09:43:11 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:02.398 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:02.398 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:02.398 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:02.398 09:43:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:02.657 09:43:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:02.657 09:43:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:02.657 09:43:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.657 09:43:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3573127 00:32:02.657 09:43:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3573127 00:32:02.657 09:43:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3573127 ']' 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:02.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.657 09:43:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.657 [2024-12-13 09:43:14.849326] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:32:02.657 [2024-12-13 09:43:14.849373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.657 [2024-12-13 09:43:14.917902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.657 [2024-12-13 09:43:14.958430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.657 [2024-12-13 09:43:14.958472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.657 [2024-12-13 09:43:14.958479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.657 [2024-12-13 09:43:14.958484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.657 [2024-12-13 09:43:14.958489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.657 [2024-12-13 09:43:14.959011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:02.915 09:43:15 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 09:43:15 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.915 09:43:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:02.915 09:43:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 [2024-12-13 09:43:15.096706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.915 09:43:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 ************************************ 00:32:02.915 START TEST fio_dif_1_default 00:32:02.915 ************************************ 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 bdev_null0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 [2024-12-13 09:43:15.156990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.915 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:02.915 { 00:32:02.915 "params": { 00:32:02.915 "name": "Nvme$subsystem", 00:32:02.915 "trtype": "$TEST_TRANSPORT", 00:32:02.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:02.916 "adrfam": "ipv4", 00:32:02.916 "trsvcid": "$NVMF_PORT", 00:32:02.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:02.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:02.916 "hdgst": ${hdgst:-false}, 00:32:02.916 "ddgst": ${ddgst:-false} 00:32:02.916 }, 00:32:02.916 "method": "bdev_nvme_attach_controller" 00:32:02.916 } 00:32:02.916 EOF 00:32:02.916 )") 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
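Collapsing the rpc_cmd calls traced above, the target side of fio_dif_1_default is a DIF-capable TCP transport, one null bdev with per-block metadata, and one subsystem exposing it on the namespaced address. Written as direct scripts/rpc.py calls (rpc_cmd in the test framework forwards to scripts/rpc.py against the /var/tmp/spdk.sock shown earlier; every value below is the one used in this run):

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The initiator side never touches the kernel nvme driver here: fio is run through the spdk_bdev ioengine (LD_PRELOAD of build/fio/spdk_bdev), and the bdev_nvme_attach_controller JSON assembled by gen_nvmf_target_json above is fed to it on /dev/fd/62, as the rendered config printed below shows.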
00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:02.916 "params": { 00:32:02.916 "name": "Nvme0", 00:32:02.916 "trtype": "tcp", 00:32:02.916 "traddr": "10.0.0.2", 00:32:02.916 "adrfam": "ipv4", 00:32:02.916 "trsvcid": "4420", 00:32:02.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.916 "hdgst": false, 00:32:02.916 "ddgst": false 00:32:02.916 }, 00:32:02.916 "method": "bdev_nvme_attach_controller" 00:32:02.916 }' 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:02.916 09:43:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.173 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:03.173 fio-3.35 00:32:03.173 Starting 1 thread 00:32:15.382 00:32:15.382 filename0: (groupid=0, jobs=1): err= 0: pid=3573458: Fri Dec 13 09:43:26 2024 00:32:15.382 read: IOPS=192, BW=771KiB/s (790kB/s)(7728KiB/10017msec) 00:32:15.382 slat (nsec): min=5853, max=46211, avg=6372.40, stdev=1637.89 00:32:15.382 clat (usec): min=384, max=42612, avg=20720.56, stdev=20444.85 00:32:15.382 lat (usec): min=390, max=42619, avg=20726.93, stdev=20444.75 00:32:15.382 clat percentiles (usec): 00:32:15.382 | 1.00th=[ 400], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 429], 00:32:15.382 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 627], 60.00th=[40633], 00:32:15.382 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:32:15.382 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:15.382 | 99.99th=[42730] 00:32:15.382 bw ( KiB/s): min= 704, max= 832, per=99.94%, avg=771.20, stdev=32.67, samples=20 00:32:15.382 iops : min= 176, max= 208, avg=192.80, stdev= 8.17, samples=20 00:32:15.382 lat (usec) : 500=29.09%, 750=21.22% 00:32:15.382 lat (msec) : 2=0.21%, 50=49.48% 00:32:15.382 cpu : usr=92.29%, sys=7.44%, ctx=17, majf=0, minf=0 00:32:15.382 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.382 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.382 latency : target=0, window=0, percentile=100.00%, depth=4 
00:32:15.382 00:32:15.382 Run status group 0 (all jobs): 00:32:15.382 READ: bw=771KiB/s (790kB/s), 771KiB/s-771KiB/s (790kB/s-790kB/s), io=7728KiB (7913kB), run=10017-10017msec 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.382 00:32:15.382 real 0m11.186s 00:32:15.382 user 0m16.256s 00:32:15.382 sys 0m1.033s 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 ************************************ 00:32:15.382 END TEST fio_dif_1_default 00:32:15.382 ************************************ 00:32:15.382 09:43:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:15.382 09:43:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.382 09:43:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 ************************************ 00:32:15.382 START TEST fio_dif_1_multi_subsystems 00:32:15.382 ************************************ 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 bdev_null0 00:32:15.382 09:43:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.382 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 [2024-12-13 09:43:26.415417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 bdev_null1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.383 { 00:32:15.383 "params": { 00:32:15.383 "name": "Nvme$subsystem", 00:32:15.383 "trtype": "$TEST_TRANSPORT", 00:32:15.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.383 "adrfam": "ipv4", 00:32:15.383 "trsvcid": "$NVMF_PORT", 00:32:15.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.383 "hdgst": ${hdgst:-false}, 00:32:15.383 "ddgst": ${ddgst:-false} 00:32:15.383 }, 00:32:15.383 "method": "bdev_nvme_attach_controller" 00:32:15.383 } 00:32:15.383 EOF 00:32:15.383 )") 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.383 
09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:15.383 { 00:32:15.383 "params": { 00:32:15.383 "name": "Nvme$subsystem", 00:32:15.383 "trtype": "$TEST_TRANSPORT", 00:32:15.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.383 "adrfam": "ipv4", 00:32:15.383 "trsvcid": "$NVMF_PORT", 00:32:15.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.383 "hdgst": ${hdgst:-false}, 00:32:15.383 "ddgst": ${ddgst:-false} 00:32:15.383 }, 00:32:15.383 "method": "bdev_nvme_attach_controller" 00:32:15.383 } 00:32:15.383 EOF 00:32:15.383 )") 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:15.383 "params": { 00:32:15.383 "name": "Nvme0", 00:32:15.383 "trtype": "tcp", 00:32:15.383 "traddr": "10.0.0.2", 00:32:15.383 "adrfam": "ipv4", 00:32:15.383 "trsvcid": "4420", 00:32:15.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.383 "hdgst": false, 00:32:15.383 "ddgst": false 00:32:15.383 }, 00:32:15.383 "method": "bdev_nvme_attach_controller" 00:32:15.383 },{ 00:32:15.383 "params": { 00:32:15.383 "name": "Nvme1", 00:32:15.383 "trtype": "tcp", 00:32:15.383 "traddr": "10.0.0.2", 00:32:15.383 "adrfam": "ipv4", 00:32:15.383 "trsvcid": "4420", 00:32:15.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.383 "hdgst": false, 00:32:15.383 "ddgst": false 00:32:15.383 }, 00:32:15.383 "method": "bdev_nvme_attach_controller" 00:32:15.383 }' 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:15.383 09:43:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.383 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:15.384 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:15.384 fio-3.35 00:32:15.384 Starting 2 threads 00:32:25.367 00:32:25.367 filename0: (groupid=0, jobs=1): err= 0: pid=3575419: Fri Dec 13 09:43:37 2024 00:32:25.367 read: IOPS=190, BW=763KiB/s (782kB/s)(7664KiB/10038msec) 00:32:25.367 slat (nsec): min=5951, max=57468, avg=7359.03, stdev=3087.70 00:32:25.367 clat (usec): min=403, max=42597, avg=20934.63, stdev=20496.90 00:32:25.367 lat (usec): min=409, max=42604, avg=20941.99, stdev=20496.10 00:32:25.367 clat percentiles (usec): 00:32:25.367 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 453], 00:32:25.367 | 30.00th=[ 465], 40.00th=[ 553], 50.00th=[ 1319], 60.00th=[41157], 00:32:25.367 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:32:25.367 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:25.367 | 99.99th=[42730] 00:32:25.367 bw ( KiB/s): min= 704, max= 768, per=66.30%, avg=764.80, stdev=14.31, samples=20 00:32:25.367 iops : min= 176, max= 192, avg=191.20, stdev= 3.58, samples=20 00:32:25.367 lat (usec) : 500=37.68%, 750=11.80%, 1000=0.42% 00:32:25.367 lat (msec) : 2=0.21%, 50=49.90% 00:32:25.367 cpu : usr=96.78%, sys=2.96%, ctx=35, majf=0, minf=109 00:32:25.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.367 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:25.367 filename1: (groupid=0, jobs=1): err= 0: pid=3575420: Fri Dec 13 09:43:37 2024 00:32:25.367 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:32:25.367 slat (nsec): min=5965, max=31846, avg=7981.34, stdev=3744.25 00:32:25.367 clat (usec): min=40781, max=42769, avg=41020.67, stdev=224.37 00:32:25.367 lat (usec): min=40787, max=42796, avg=41028.65, stdev=224.99 00:32:25.367 clat percentiles (usec): 00:32:25.367 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:25.367 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.367 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:25.367 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:25.367 | 99.99th=[42730] 00:32:25.367 bw ( KiB/s): min= 384, max= 416, per=33.67%, avg=388.80, stdev=11.72, samples=20 00:32:25.367 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:32:25.367 lat (msec) : 50=100.00% 00:32:25.367 cpu : usr=96.63%, sys=3.12%, ctx=13, majf=0, minf=37 00:32:25.367 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:25.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.367 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.367 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:25.367 00:32:25.367 Run status group 0 (all jobs): 00:32:25.367 READ: bw=1152KiB/s (1180kB/s), 390KiB/s-763KiB/s (399kB/s-782kB/s), io=11.3MiB (11.8MB), run=10015-10038msec 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 00:32:25.627 real 0m11.408s 00:32:25.627 user 0m26.054s 00:32:25.627 sys 0m0.929s 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 ************************************ 00:32:25.627 END TEST fio_dif_1_multi_subsystems 00:32:25.627 ************************************ 00:32:25.627 09:43:37 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:32:25.627 09:43:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.627 09:43:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 ************************************ 00:32:25.627 START TEST fio_dif_rand_params 00:32:25.627 ************************************ 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 bdev_null0 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.628 [2024-12-13 09:43:37.885442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.628 
09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.628 { 00:32:25.628 "params": { 00:32:25.628 "name": "Nvme$subsystem", 00:32:25.628 "trtype": "$TEST_TRANSPORT", 00:32:25.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.628 "adrfam": "ipv4", 00:32:25.628 "trsvcid": "$NVMF_PORT", 00:32:25.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.628 "hdgst": ${hdgst:-false}, 00:32:25.628 "ddgst": ${ddgst:-false} 00:32:25.628 }, 00:32:25.628 "method": "bdev_nvme_attach_controller" 00:32:25.628 } 00:32:25.628 EOF 00:32:25.628 )") 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.628 "params": { 00:32:25.628 "name": "Nvme0", 00:32:25.628 "trtype": "tcp", 00:32:25.628 "traddr": "10.0.0.2", 00:32:25.628 "adrfam": "ipv4", 00:32:25.628 "trsvcid": "4420", 00:32:25.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.628 "hdgst": false, 00:32:25.628 "ddgst": false 00:32:25.628 }, 00:32:25.628 "method": "bdev_nvme_attach_controller" 00:32:25.628 }' 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:25.628 09:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.886 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:25.886 ... 
00:32:25.886 fio-3.35 00:32:25.886 Starting 3 threads 00:32:32.455 00:32:32.455 filename0: (groupid=0, jobs=1): err= 0: pid=3577315: Fri Dec 13 09:43:43 2024 00:32:32.455 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(199MiB/5046msec) 00:32:32.455 slat (nsec): min=6196, max=32776, avg=11136.65, stdev=2222.77 00:32:32.455 clat (usec): min=3368, max=50050, avg=9491.30, stdev=4996.49 00:32:32.455 lat (usec): min=3377, max=50062, avg=9502.44, stdev=4996.48 00:32:32.455 clat percentiles (usec): 00:32:32.455 | 1.00th=[ 4047], 5.00th=[ 5800], 10.00th=[ 6718], 20.00th=[ 7898], 00:32:32.455 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:32:32.455 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:32:32.455 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:32:32.455 | 99.99th=[50070] 00:32:32.455 bw ( KiB/s): min=36352, max=44032, per=34.72%, avg=40601.60, stdev=2290.37, samples=10 00:32:32.455 iops : min= 284, max= 344, avg=317.20, stdev=17.89, samples=10 00:32:32.455 lat (msec) : 4=0.44%, 10=73.30%, 20=24.81%, 50=1.39%, 100=0.06% 00:32:32.455 cpu : usr=94.23%, sys=5.47%, ctx=8, majf=0, minf=60 00:32:32.455 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.455 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:32.455 filename0: (groupid=0, jobs=1): err= 0: pid=3577316: Fri Dec 13 09:43:43 2024 00:32:32.455 read: IOPS=292, BW=36.6MiB/s (38.3MB/s)(183MiB/5004msec) 00:32:32.455 slat (nsec): min=6177, max=32944, avg=11417.14, stdev=2147.82 00:32:32.456 clat (usec): min=3427, max=55639, avg=10238.45, stdev=5909.15 00:32:32.456 lat (usec): min=3433, max=55665, avg=10249.87, stdev=5909.21 00:32:32.456 clat percentiles (usec): 00:32:32.456 | 1.00th=[ 3949], 5.00th=[ 6652], 10.00th=[ 7767], 20.00th=[ 8455], 00:32:32.456 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:32:32.456 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:32:32.456 | 99.00th=[49546], 99.50th=[51119], 99.90th=[54264], 99.95th=[55837], 00:32:32.456 | 99.99th=[55837] 00:32:32.456 bw ( KiB/s): min=34048, max=40960, per=32.01%, avg=37427.20, stdev=2624.05, samples=10 00:32:32.456 iops : min= 266, max= 320, avg=292.40, stdev=20.50, samples=10 00:32:32.456 lat (msec) : 4=1.16%, 10=60.86%, 20=35.93%, 50=1.37%, 100=0.68% 00:32:32.456 cpu : usr=93.92%, sys=5.78%, ctx=13, majf=0, minf=19 00:32:32.456 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.456 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:32.456 filename0: (groupid=0, jobs=1): err= 0: pid=3577317: Fri Dec 13 09:43:43 2024 00:32:32.456 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(195MiB/5044msec) 00:32:32.456 slat (nsec): min=6118, max=33483, avg=11442.47, stdev=2168.35 00:32:32.456 clat (usec): min=3585, max=55517, avg=9671.09, stdev=4503.34 00:32:32.456 lat (usec): min=3592, max=55543, avg=9682.53, stdev=4503.58 00:32:32.456 clat percentiles (usec): 00:32:32.456 | 1.00th=[ 3851], 5.00th=[ 5997], 10.00th=[ 
6915], 20.00th=[ 8094], 00:32:32.456 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:32:32.456 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[11994], 00:32:32.456 | 99.00th=[45351], 99.50th=[49546], 99.90th=[52167], 99.95th=[55313], 00:32:32.456 | 99.99th=[55313] 00:32:32.456 bw ( KiB/s): min=33792, max=44544, per=34.06%, avg=39833.60, stdev=3117.18, samples=10 00:32:32.456 iops : min= 264, max= 348, avg=311.20, stdev=24.35, samples=10 00:32:32.456 lat (msec) : 4=2.50%, 10=62.39%, 20=34.02%, 50=0.71%, 100=0.39% 00:32:32.456 cpu : usr=93.54%, sys=6.15%, ctx=9, majf=0, minf=66 00:32:32.456 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.456 issued rwts: total=1558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:32.456 00:32:32.456 Run status group 0 (all jobs): 00:32:32.456 READ: bw=114MiB/s (120MB/s), 36.6MiB/s-39.3MiB/s (38.3MB/s-41.2MB/s), io=576MiB (604MB), run=5004-5046msec 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 bdev_null0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 [2024-12-13 09:43:44.128787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 bdev_null1 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.456 bdev_null2 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.456 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.457 { 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme$subsystem", 00:32:32.457 "trtype": "$TEST_TRANSPORT", 00:32:32.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "$NVMF_PORT", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.457 "hdgst": ${hdgst:-false}, 00:32:32.457 "ddgst": ${ddgst:-false} 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 } 00:32:32.457 EOF 00:32:32.457 )") 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.457 { 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme$subsystem", 00:32:32.457 "trtype": "$TEST_TRANSPORT", 00:32:32.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "$NVMF_PORT", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.457 "hdgst": ${hdgst:-false}, 00:32:32.457 "ddgst": ${ddgst:-false} 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 } 00:32:32.457 EOF 00:32:32.457 )") 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.457 { 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme$subsystem", 00:32:32.457 "trtype": "$TEST_TRANSPORT", 00:32:32.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "$NVMF_PORT", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.457 "hdgst": ${hdgst:-false}, 00:32:32.457 "ddgst": ${ddgst:-false} 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 } 00:32:32.457 EOF 00:32:32.457 )") 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme0", 00:32:32.457 "trtype": "tcp", 00:32:32.457 "traddr": "10.0.0.2", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "4420", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:32.457 "hdgst": false, 00:32:32.457 "ddgst": false 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 },{ 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme1", 00:32:32.457 "trtype": "tcp", 00:32:32.457 "traddr": "10.0.0.2", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "4420", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.457 "hdgst": false, 00:32:32.457 "ddgst": false 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 },{ 00:32:32.457 "params": { 00:32:32.457 "name": "Nvme2", 00:32:32.457 "trtype": "tcp", 00:32:32.457 "traddr": "10.0.0.2", 00:32:32.457 "adrfam": "ipv4", 00:32:32.457 "trsvcid": "4420", 00:32:32.457 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:32.457 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:32.457 "hdgst": false, 00:32:32.457 "ddgst": false 00:32:32.457 }, 00:32:32.457 "method": "bdev_nvme_attach_controller" 00:32:32.457 }' 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.457 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:32.458 09:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.458 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:32.458 ... 00:32:32.458 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:32.458 ... 00:32:32.458 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:32.458 ... 00:32:32.458 fio-3.35 00:32:32.458 Starting 24 threads 00:32:44.689 00:32:44.689 filename0: (groupid=0, jobs=1): err= 0: pid=3578526: Fri Dec 13 09:43:55 2024 00:32:44.689 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec) 00:32:44.689 slat (nsec): min=7313, max=87565, avg=45968.58, stdev=16921.14 00:32:44.689 clat (usec): min=7868, max=31890, avg=25996.91, stdev=2337.68 00:32:44.689 lat (usec): min=7917, max=31933, avg=26042.88, stdev=2340.49 00:32:44.689 clat percentiles (usec): 00:32:44.689 | 1.00th=[14877], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:32:44.689 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:32:44.689 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[30016], 00:32:44.689 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:32:44.689 | 99.99th=[31851] 00:32:44.689 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.689 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.689 lat (msec) : 10=0.30%, 20=0.73%, 50=98.98% 00:32:44.689 cpu : usr=98.75%, sys=0.87%, ctx=18, majf=0, minf=9 00:32:44.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.689 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.689 filename0: (groupid=0, jobs=1): err= 0: pid=3578527: Fri Dec 13 09:43:55 2024 00:32:44.689 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10003msec) 00:32:44.689 slat (nsec): min=6119, max=83649, avg=14184.75, stdev=10058.07 00:32:44.689 clat (usec): min=8580, max=43472, avg=26414.29, stdev=2481.75 00:32:44.689 lat (usec): min=8592, max=43513, avg=26428.48, stdev=2480.63 00:32:44.689 clat percentiles (usec): 00:32:44.689 | 1.00th=[22414], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:32:44.689 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:32:44.690 | 70.00th=[26870], 80.00th=[27919], 90.00th=[29492], 95.00th=[30540], 00:32:44.690 | 99.00th=[31327], 99.50th=[35390], 99.90th=[43254], 99.95th=[43254], 00:32:44.690 | 99.99th=[43254] 00:32:44.690 bw ( KiB/s): min= 2171, max= 2688, per=4.15%, avg=2405.00, stdev=132.16, samples=19 00:32:44.690 iops : min= 542, max= 672, avg=601.21, stdev=33.11, samples=19 00:32:44.690 lat (msec) : 10=0.27%, 20=0.63%, 50=99.10% 00:32:44.690 cpu : usr=98.55%, sys=0.96%, ctx=55, majf=0, 
minf=9 00:32:44.690 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578528: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=615, BW=2464KiB/s (2523kB/s)(24.1MiB/10011msec) 00:32:44.690 slat (nsec): min=7453, max=87780, avg=41384.71, stdev=15262.91 00:32:44.690 clat (usec): min=5730, max=35036, avg=25614.53, stdev=3272.17 00:32:44.690 lat (usec): min=5739, max=35052, avg=25655.91, stdev=3279.24 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[11600], 5.00th=[16450], 10.00th=[24249], 20.00th=[24773], 00:32:44.690 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.690 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[30016], 00:32:44.690 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.690 | 99.99th=[34866] 00:32:44.690 bw ( KiB/s): min= 2171, max= 3632, per=4.26%, avg=2467.16, stdev=305.86, samples=19 00:32:44.690 iops : min= 542, max= 908, avg=616.63, stdev=76.56, samples=19 00:32:44.690 lat (msec) : 10=0.52%, 20=5.06%, 50=94.42% 00:32:44.690 cpu : usr=98.71%, sys=0.84%, ctx=57, majf=0, minf=9 00:32:44.690 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578529: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec) 00:32:44.690 slat (nsec): min=7680, max=89545, avg=42155.30, stdev=18715.33 00:32:44.690 clat (usec): min=9136, max=31954, avg=26080.61, stdev=2329.53 00:32:44.690 lat (usec): min=9154, max=31994, avg=26122.76, stdev=2331.28 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[14877], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:32:44.690 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.690 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30278], 00:32:44.690 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31851], 00:32:44.690 | 99.99th=[31851] 00:32:44.690 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.690 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.690 lat (msec) : 10=0.26%, 20=0.79%, 50=98.94% 00:32:44.690 cpu : usr=98.05%, sys=1.26%, ctx=135, majf=0, minf=9 00:32:44.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578530: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=602, BW=2412KiB/s (2469kB/s)(23.6MiB/10005msec) 
00:32:44.690 slat (nsec): min=6193, max=82398, avg=30431.13, stdev=16230.32 00:32:44.690 clat (usec): min=16029, max=32044, avg=26300.23, stdev=1929.15 00:32:44.690 lat (usec): min=16038, max=32060, avg=26330.66, stdev=1930.59 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.690 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:32:44.690 | 70.00th=[26870], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:32:44.690 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[32113], 00:32:44.690 | 99.99th=[32113] 00:32:44.690 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2411.26, stdev=115.21, samples=19 00:32:44.690 iops : min= 544, max= 640, avg=602.74, stdev=28.84, samples=19 00:32:44.690 lat (msec) : 20=0.53%, 50=99.47% 00:32:44.690 cpu : usr=98.83%, sys=0.78%, ctx=36, majf=0, minf=9 00:32:44.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578531: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=602, BW=2412KiB/s (2469kB/s)(23.6MiB/10005msec) 00:32:44.690 slat (nsec): min=6153, max=82270, avg=32628.29, stdev=18015.44 00:32:44.690 clat (usec): min=8333, max=52237, avg=26256.58, stdev=2491.50 00:32:44.690 lat (usec): min=8346, max=52250, avg=26289.21, stdev=2491.20 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.690 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:32:44.690 | 70.00th=[26870], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:32:44.690 | 99.00th=[31327], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:32:44.690 | 99.99th=[52167] 00:32:44.690 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2404.53, stdev=110.28, samples=19 00:32:44.690 iops : min= 542, max= 640, avg=601.05, stdev=27.70, samples=19 00:32:44.690 lat (msec) : 10=0.27%, 20=0.60%, 50=99.10%, 100=0.03% 00:32:44.690 cpu : usr=98.12%, sys=1.24%, ctx=169, majf=0, minf=9 00:32:44.690 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578532: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.7MiB/10048msec) 00:32:44.690 slat (nsec): min=4501, max=85446, avg=35346.29, stdev=20114.91 00:32:44.690 clat (usec): min=12988, max=48117, avg=26069.86, stdev=2578.14 00:32:44.690 lat (usec): min=13007, max=48149, avg=26105.20, stdev=2581.07 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[16909], 5.00th=[23725], 10.00th=[24511], 20.00th=[24773], 00:32:44.690 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.690 | 70.00th=[26608], 80.00th=[27657], 90.00th=[29230], 95.00th=[30278], 00:32:44.690 | 99.00th=[32637], 99.50th=[35914], 99.90th=[41681], 
99.95th=[47973], 00:32:44.690 | 99.99th=[47973] 00:32:44.690 bw ( KiB/s): min= 2256, max= 2608, per=4.19%, avg=2424.05, stdev=102.80, samples=20 00:32:44.690 iops : min= 564, max= 652, avg=605.90, stdev=25.80, samples=20 00:32:44.690 lat (msec) : 20=2.09%, 50=97.91% 00:32:44.690 cpu : usr=98.81%, sys=0.79%, ctx=47, majf=0, minf=9 00:32:44.690 IO depths : 1=5.5%, 2=11.3%, 4=23.3%, 8=52.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=3578533: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=602, BW=2412KiB/s (2470kB/s)(23.6MiB/10004msec) 00:32:44.690 slat (nsec): min=7520, max=85665, avg=33099.90, stdev=19460.10 00:32:44.690 clat (usec): min=8217, max=43548, avg=26228.79, stdev=2322.48 00:32:44.690 lat (usec): min=8232, max=43589, avg=26261.89, stdev=2324.36 00:32:44.690 clat percentiles (usec): 00:32:44.690 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.690 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26346], 00:32:44.690 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:32:44.690 | 99.00th=[31065], 99.50th=[31327], 99.90th=[43254], 99.95th=[43254], 00:32:44.690 | 99.99th=[43779] 00:32:44.690 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2404.74, stdev=110.34, samples=19 00:32:44.690 iops : min= 542, max= 640, avg=601.11, stdev=27.71, samples=19 00:32:44.690 lat (msec) : 10=0.27%, 20=0.30%, 50=99.44% 00:32:44.690 cpu : usr=98.07%, sys=1.27%, ctx=134, majf=0, minf=9 00:32:44.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.690 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.690 filename1: (groupid=0, jobs=1): err= 0: pid=3578534: Fri Dec 13 09:43:55 2024 00:32:44.690 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10012msec) 00:32:44.690 slat (nsec): min=6387, max=86791, avg=37919.49, stdev=16747.81 00:32:44.691 clat (usec): min=12368, max=31965, avg=26141.13, stdev=2031.66 00:32:44.691 lat (usec): min=12399, max=31985, avg=26179.05, stdev=2035.05 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.691 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.691 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.691 | 99.99th=[31851] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2688, per=4.16%, avg=2410.95, stdev=143.53, samples=19 00:32:44.691 iops : min= 542, max= 672, avg=602.63, stdev=35.96, samples=19 00:32:44.691 lat (msec) : 20=0.79%, 50=99.21% 00:32:44.691 cpu : usr=98.69%, sys=0.91%, ctx=27, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:44.691 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578535: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:44.691 slat (nsec): min=6338, max=97605, avg=41101.44, stdev=20562.44 00:32:44.691 clat (usec): min=17041, max=31920, avg=26137.36, stdev=1901.53 00:32:44.691 lat (usec): min=17057, max=31944, avg=26178.47, stdev=1905.89 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.691 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.691 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31851], 00:32:44.691 | 99.99th=[31851] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2411.26, stdev=130.60, samples=19 00:32:44.691 iops : min= 542, max= 640, avg=602.74, stdev=32.76, samples=19 00:32:44.691 lat (msec) : 20=0.53%, 50=99.47% 00:32:44.691 cpu : usr=98.73%, sys=0.88%, ctx=20, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578536: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec) 00:32:44.691 slat (nsec): min=6128, max=78599, avg=16505.11, stdev=12474.08 00:32:44.691 clat (usec): min=9183, max=32052, avg=26286.38, stdev=2347.20 00:32:44.691 lat (usec): min=9202, max=32068, avg=26302.88, stdev=2346.68 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[14877], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:32:44.691 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26608], 00:32:44.691 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29230], 95.00th=[30540], 00:32:44.691 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:32:44.691 | 99.99th=[32113] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.691 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.691 lat (msec) : 10=0.26%, 20=0.79%, 50=98.94% 00:32:44.691 cpu : usr=98.47%, sys=1.06%, ctx=47, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578537: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10012msec) 00:32:44.691 slat (nsec): min=8285, max=86225, avg=44824.43, stdev=17201.34 00:32:44.691 clat (usec): min=9189, max=32594, avg=26044.86, stdev=2343.36 00:32:44.691 lat (usec): min=9212, max=32644, avg=26089.68, stdev=2345.14 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 
1.00th=[14877], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:32:44.691 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30278], 00:32:44.691 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.691 | 99.99th=[32637] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.691 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.691 lat (msec) : 10=0.26%, 20=0.82%, 50=98.91% 00:32:44.691 cpu : usr=97.70%, sys=1.48%, ctx=173, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578538: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=602, BW=2411KiB/s (2469kB/s)(23.6MiB/10007msec) 00:32:44.691 slat (nsec): min=7731, max=91060, avg=35775.32, stdev=19402.37 00:32:44.691 clat (usec): min=8205, max=54490, avg=26220.15, stdev=2442.00 00:32:44.691 lat (usec): min=8221, max=54507, avg=26255.92, stdev=2442.45 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.691 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:32:44.691 | 99.00th=[31065], 99.50th=[31327], 99.90th=[46924], 99.95th=[46924], 00:32:44.691 | 99.99th=[54264] 00:32:44.691 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2404.26, stdev=109.90, samples=19 00:32:44.691 iops : min= 544, max= 640, avg=600.95, stdev=27.55, samples=19 00:32:44.691 lat (msec) : 10=0.27%, 20=0.33%, 50=99.37%, 100=0.03% 00:32:44.691 cpu : usr=98.88%, sys=0.75%, ctx=14, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578539: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec) 00:32:44.691 slat (nsec): min=7222, max=88396, avg=44846.73, stdev=17652.17 00:32:44.691 clat (usec): min=8036, max=31926, avg=25990.97, stdev=2323.32 00:32:44.691 lat (usec): min=8051, max=31983, avg=26035.82, stdev=2326.84 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[16712], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:32:44.691 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[30016], 00:32:44.691 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31851], 00:32:44.691 | 99.99th=[31851] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.691 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.691 lat (msec) : 10=0.26%, 20=0.86%, 
50=98.88% 00:32:44.691 cpu : usr=98.04%, sys=1.19%, ctx=106, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.691 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=3578540: Fri Dec 13 09:43:55 2024 00:32:44.691 read: IOPS=603, BW=2413KiB/s (2470kB/s)(23.6MiB/10001msec) 00:32:44.691 slat (nsec): min=7098, max=96559, avg=40222.86, stdev=21170.86 00:32:44.691 clat (usec): min=14362, max=31806, avg=26203.32, stdev=1965.11 00:32:44.691 lat (usec): min=14370, max=31840, avg=26243.54, stdev=1967.83 00:32:44.691 clat percentiles (usec): 00:32:44.691 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.691 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26346], 00:32:44.691 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30016], 00:32:44.691 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31851], 00:32:44.691 | 99.99th=[31851] 00:32:44.691 bw ( KiB/s): min= 2171, max= 2560, per=4.18%, avg=2417.42, stdev=128.20, samples=19 00:32:44.691 iops : min= 542, max= 640, avg=604.21, stdev=32.22, samples=19 00:32:44.691 lat (msec) : 20=0.53%, 50=99.47% 00:32:44.691 cpu : usr=98.75%, sys=0.81%, ctx=72, majf=0, minf=9 00:32:44.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename1: (groupid=0, jobs=1): err= 0: pid=3578541: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=602, BW=2412KiB/s (2469kB/s)(23.6MiB/10005msec) 00:32:44.692 slat (nsec): min=7136, max=97511, avg=41581.44, stdev=20114.84 00:32:44.692 clat (usec): min=17024, max=36939, avg=26155.49, stdev=1926.11 00:32:44.692 lat (usec): min=17046, max=36962, avg=26197.07, stdev=1930.28 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.692 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.692 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.692 | 99.99th=[36963] 00:32:44.692 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2411.26, stdev=115.21, samples=19 00:32:44.692 iops : min= 544, max= 640, avg=602.74, stdev=28.84, samples=19 00:32:44.692 lat (msec) : 20=0.56%, 50=99.44% 00:32:44.692 cpu : usr=98.21%, sys=1.08%, ctx=58, majf=0, minf=9 00:32:44.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578542: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=604, 
BW=2418KiB/s (2476kB/s)(23.6MiB/10007msec) 00:32:44.692 slat (nsec): min=4271, max=79743, avg=34574.65, stdev=16649.46 00:32:44.692 clat (usec): min=11965, max=54125, avg=26161.18, stdev=2819.08 00:32:44.692 lat (usec): min=11977, max=54138, avg=26195.75, stdev=2821.70 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[16909], 5.00th=[23725], 10.00th=[24511], 20.00th=[24773], 00:32:44.692 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26870], 80.00th=[27919], 90.00th=[29492], 95.00th=[30540], 00:32:44.692 | 99.00th=[34341], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:32:44.692 | 99.99th=[54264] 00:32:44.692 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2411.84, stdev=107.30, samples=19 00:32:44.692 iops : min= 544, max= 640, avg=602.84, stdev=26.95, samples=19 00:32:44.692 lat (msec) : 20=2.26%, 50=97.70%, 100=0.03% 00:32:44.692 cpu : usr=98.58%, sys=0.87%, ctx=52, majf=0, minf=9 00:32:44.692 IO depths : 1=5.0%, 2=10.8%, 4=23.2%, 8=53.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578543: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=602, BW=2412KiB/s (2469kB/s)(23.6MiB/10005msec) 00:32:44.692 slat (nsec): min=6901, max=97398, avg=42742.27, stdev=20145.31 00:32:44.692 clat (usec): min=15770, max=32473, avg=26147.22, stdev=1967.08 00:32:44.692 lat (usec): min=15781, max=32493, avg=26189.96, stdev=1971.39 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.692 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30016], 00:32:44.692 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[32113], 00:32:44.692 | 99.99th=[32375] 00:32:44.692 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2411.26, stdev=115.21, samples=19 00:32:44.692 iops : min= 544, max= 640, avg=602.74, stdev=28.84, samples=19 00:32:44.692 lat (msec) : 20=0.63%, 50=99.37% 00:32:44.692 cpu : usr=97.42%, sys=1.50%, ctx=321, majf=0, minf=9 00:32:44.692 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578544: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10012msec) 00:32:44.692 slat (nsec): min=7666, max=87186, avg=43395.88, stdev=14902.20 00:32:44.692 clat (usec): min=8323, max=33197, avg=26025.88, stdev=2339.33 00:32:44.692 lat (usec): min=8334, max=33211, avg=26069.28, stdev=2342.43 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[14877], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:32:44.692 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.692 | 99.00th=[30802], 
99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.692 | 99.99th=[33162] 00:32:44.692 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.692 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.692 lat (msec) : 10=0.30%, 20=0.79%, 50=98.91% 00:32:44.692 cpu : usr=98.61%, sys=1.01%, ctx=16, majf=0, minf=9 00:32:44.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578545: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=621, BW=2486KiB/s (2546kB/s)(24.3MiB/10007msec) 00:32:44.692 slat (nsec): min=6178, max=79840, avg=22247.93, stdev=15452.63 00:32:44.692 clat (usec): min=6857, max=47254, avg=25569.24, stdev=3623.19 00:32:44.692 lat (usec): min=6864, max=47274, avg=25591.49, stdev=3624.88 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[16450], 5.00th=[19530], 10.00th=[21365], 20.00th=[23987], 00:32:44.692 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25560], 00:32:44.692 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29754], 95.00th=[30802], 00:32:44.692 | 99.00th=[35914], 99.50th=[38536], 99.90th=[46924], 99.95th=[47449], 00:32:44.692 | 99.99th=[47449] 00:32:44.692 bw ( KiB/s): min= 2104, max= 2672, per=4.27%, avg=2474.95, stdev=130.76, samples=19 00:32:44.692 iops : min= 526, max= 668, avg=618.63, stdev=32.73, samples=19 00:32:44.692 lat (msec) : 10=0.10%, 20=5.19%, 50=94.71% 00:32:44.692 cpu : usr=98.61%, sys=0.86%, ctx=69, majf=0, minf=12 00:32:44.692 IO depths : 1=3.1%, 2=6.2%, 4=13.9%, 8=65.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 4=91.3%, 8=4.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578546: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10005msec) 00:32:44.692 slat (nsec): min=6578, max=85383, avg=34735.50, stdev=20035.13 00:32:44.692 clat (usec): min=8403, max=45007, avg=26145.02, stdev=2609.18 00:32:44.692 lat (usec): min=8412, max=45020, avg=26179.75, stdev=2610.53 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[17957], 5.00th=[23725], 10.00th=[24511], 20.00th=[24773], 00:32:44.692 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29492], 95.00th=[30278], 00:32:44.692 | 99.00th=[31851], 99.50th=[36439], 99.90th=[44827], 99.95th=[44827], 00:32:44.692 | 99.99th=[44827] 00:32:44.692 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2409.58, stdev=107.59, samples=19 00:32:44.692 iops : min= 542, max= 640, avg=602.32, stdev=27.03, samples=19 00:32:44.692 lat (msec) : 10=0.10%, 20=1.22%, 50=98.68% 00:32:44.692 cpu : usr=98.09%, sys=1.31%, ctx=145, majf=0, minf=9 00:32:44.692 IO depths : 1=5.6%, 2=11.4%, 4=23.3%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 complete : 0=0.0%, 
4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.692 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=3578547: Fri Dec 13 09:43:55 2024 00:32:44.692 read: IOPS=602, BW=2411KiB/s (2469kB/s)(23.6MiB/10008msec) 00:32:44.692 slat (nsec): min=6300, max=95097, avg=40423.27, stdev=21614.31 00:32:44.692 clat (usec): min=12327, max=37967, avg=26140.28, stdev=2062.18 00:32:44.692 lat (usec): min=12343, max=37984, avg=26180.70, stdev=2065.95 00:32:44.692 clat percentiles (usec): 00:32:44.692 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.692 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.692 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.693 | 99.00th=[30802], 99.50th=[31327], 99.90th=[38011], 99.95th=[38011], 00:32:44.693 | 99.99th=[38011] 00:32:44.693 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2404.26, stdev=126.22, samples=19 00:32:44.693 iops : min= 542, max= 640, avg=600.95, stdev=31.74, samples=19 00:32:44.693 lat (msec) : 20=0.53%, 50=99.47% 00:32:44.693 cpu : usr=97.57%, sys=1.43%, ctx=154, majf=0, minf=9 00:32:44.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=3578548: Fri Dec 13 09:43:55 2024 00:32:44.693 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec) 00:32:44.693 slat (nsec): min=6732, max=91238, avg=39684.04, stdev=14194.81 00:32:44.693 clat (usec): min=9103, max=31953, avg=26083.15, stdev=2339.83 00:32:44.693 lat (usec): min=9120, max=31996, avg=26122.84, stdev=2341.65 00:32:44.693 clat percentiles (usec): 00:32:44.693 | 1.00th=[14877], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:32:44.693 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.693 | 70.00th=[26870], 80.00th=[27919], 90.00th=[28967], 95.00th=[30278], 00:32:44.693 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:32:44.693 | 99.99th=[31851] 00:32:44.693 bw ( KiB/s): min= 2171, max= 2816, per=4.19%, avg=2424.21, stdev=151.60, samples=19 00:32:44.693 iops : min= 542, max= 704, avg=605.89, stdev=38.04, samples=19 00:32:44.693 lat (msec) : 10=0.26%, 20=0.79%, 50=98.94% 00:32:44.693 cpu : usr=98.69%, sys=0.93%, ctx=25, majf=0, minf=9 00:32:44.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=3578549: Fri Dec 13 09:43:55 2024 00:32:44.693 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:32:44.693 slat (nsec): min=4658, max=98293, avg=42362.93, stdev=20573.66 00:32:44.693 clat (usec): min=12266, max=33111, avg=26125.33, stdev=2014.05 00:32:44.693 lat (usec): min=12290, max=33123, avg=26167.70, stdev=2017.24 
00:32:44.693 clat percentiles (usec): 00:32:44.693 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:32:44.693 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:32:44.693 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[30016], 00:32:44.693 | 99.00th=[30802], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:32:44.693 | 99.99th=[33162] 00:32:44.693 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2411.00, stdev=137.61, samples=19 00:32:44.693 iops : min= 542, max= 640, avg=602.63, stdev=34.54, samples=19 00:32:44.693 lat (msec) : 20=0.53%, 50=99.47% 00:32:44.693 cpu : usr=98.14%, sys=1.13%, ctx=123, majf=0, minf=9 00:32:44.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.693 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:44.693 00:32:44.693 Run status group 0 (all jobs): 00:32:44.693 READ: bw=56.5MiB/s (59.3MB/s), 2411KiB/s-2486KiB/s (2469kB/s-2546kB/s), io=568MiB (596MB), run=10001-10048msec 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 bdev_null0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.693 [2024-12-13 09:43:55.947038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:44.693 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 bdev_null1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
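The trace above hands fio two file descriptors: /dev/fd/62 carries the generated SPDK JSON config and /dev/fd/61 carries the fio job file, while the spdk_bdev plugin is preloaded so fio can drive SPDK bdevs directly. A minimal stand-alone sketch of the same invocation, assuming ordinary files instead of the harness's process substitutions (the plugin and fio paths are placeholders for your own build):

# Preload the SPDK fio plugin and point fio at a bdev JSON config plus a job file.
SPDK_FIO_PLUGIN=/path/to/spdk/build/fio/spdk_bdev   # assumption: your SPDK build tree
LD_PRELOAD="$SPDK_FIO_PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf bdev.json \
    job.fio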
00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:44.694 { 00:32:44.694 "params": { 00:32:44.694 "name": "Nvme$subsystem", 00:32:44.694 "trtype": "$TEST_TRANSPORT", 00:32:44.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.694 "adrfam": "ipv4", 00:32:44.694 "trsvcid": "$NVMF_PORT", 00:32:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.694 "hdgst": ${hdgst:-false}, 00:32:44.694 "ddgst": ${ddgst:-false} 00:32:44.694 }, 00:32:44.694 "method": "bdev_nvme_attach_controller" 00:32:44.694 } 00:32:44.694 EOF 00:32:44.694 )") 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:44.694 { 00:32:44.694 "params": { 00:32:44.694 "name": "Nvme$subsystem", 00:32:44.694 "trtype": "$TEST_TRANSPORT", 00:32:44.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.694 "adrfam": "ipv4", 00:32:44.694 "trsvcid": "$NVMF_PORT", 00:32:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.694 "hdgst": ${hdgst:-false}, 00:32:44.694 "ddgst": ${ddgst:-false} 00:32:44.694 }, 00:32:44.694 "method": "bdev_nvme_attach_controller" 00:32:44.694 } 00:32:44.694 EOF 00:32:44.694 )") 
00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:44.694 09:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:44.694 "params": { 00:32:44.694 "name": "Nvme0", 00:32:44.694 "trtype": "tcp", 00:32:44.694 "traddr": "10.0.0.2", 00:32:44.694 "adrfam": "ipv4", 00:32:44.694 "trsvcid": "4420", 00:32:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.694 "hdgst": false, 00:32:44.694 "ddgst": false 00:32:44.694 }, 00:32:44.694 "method": "bdev_nvme_attach_controller" 00:32:44.694 },{ 00:32:44.694 "params": { 00:32:44.694 "name": "Nvme1", 00:32:44.694 "trtype": "tcp", 00:32:44.694 "traddr": "10.0.0.2", 00:32:44.694 "adrfam": "ipv4", 00:32:44.694 "trsvcid": "4420", 00:32:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:44.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:44.694 "hdgst": false, 00:32:44.694 "ddgst": false 00:32:44.694 }, 00:32:44.694 "method": "bdev_nvme_attach_controller" 00:32:44.694 }' 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:44.694 09:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.694 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:44.694 ... 00:32:44.694 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:44.694 ... 
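The printf output above is only the comma-joined list of bdev_nvme_attach_controller entries; the plugin consumes them wrapped in SPDK's JSON config layout. A sketch of what the file handed to --spdk_json_conf would look like for the Nvme0 entry; the outer "subsystems"/"bdev"/"config" wrapper is an assumption based on SPDK's JSON config format, and only the inner method/params object appears verbatim in the trace:

# Write a one-controller bdev config usable with --spdk_json_conf (see the fio sketch above).
cat > bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON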
00:32:44.694 fio-3.35 00:32:44.694 Starting 4 threads 00:32:50.031 00:32:50.031 filename0: (groupid=0, jobs=1): err= 0: pid=3580453: Fri Dec 13 09:44:02 2024 00:32:50.031 read: IOPS=2805, BW=21.9MiB/s (23.0MB/s)(110MiB/5004msec) 00:32:50.031 slat (nsec): min=6143, max=41645, avg=8941.44, stdev=3137.70 00:32:50.031 clat (usec): min=666, max=5613, avg=2824.62, stdev=506.33 00:32:50.031 lat (usec): min=678, max=5624, avg=2833.56, stdev=506.13 00:32:50.031 clat percentiles (usec): 00:32:50.031 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2474], 00:32:50.031 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:32:50.031 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3359], 95.00th=[ 3785], 00:32:50.031 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5014], 99.95th=[ 5080], 00:32:50.031 | 99.99th=[ 5604] 00:32:50.031 bw ( KiB/s): min=21456, max=23904, per=26.49%, avg=22452.80, stdev=733.87, samples=10 00:32:50.031 iops : min= 2682, max= 2988, avg=2806.60, stdev=91.73, samples=10 00:32:50.031 lat (usec) : 750=0.01%, 1000=0.07% 00:32:50.031 lat (msec) : 2=2.28%, 4=93.80%, 10=3.84% 00:32:50.031 cpu : usr=95.66%, sys=4.00%, ctx=10, majf=0, minf=9 00:32:50.031 IO depths : 1=0.2%, 2=6.1%, 4=65.2%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 issued rwts: total=14038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.031 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:50.031 filename0: (groupid=0, jobs=1): err= 0: pid=3580454: Fri Dec 13 09:44:02 2024 00:32:50.031 read: IOPS=2586, BW=20.2MiB/s (21.2MB/s)(101MiB/5002msec) 00:32:50.031 slat (nsec): min=6132, max=39030, avg=8909.35, stdev=3128.46 00:32:50.031 clat (usec): min=1001, max=6395, avg=3066.62, stdev=534.75 00:32:50.031 lat (usec): min=1012, max=6401, avg=3075.53, stdev=534.48 00:32:50.031 clat percentiles (usec): 00:32:50.031 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:32:50.031 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:32:50.031 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4228], 00:32:50.031 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5735], 99.95th=[ 6128], 00:32:50.031 | 99.99th=[ 6390] 00:32:50.031 bw ( KiB/s): min=18768, max=21856, per=24.26%, avg=20563.56, stdev=954.17, samples=9 00:32:50.031 iops : min= 2346, max= 2732, avg=2570.44, stdev=119.27, samples=9 00:32:50.031 lat (msec) : 2=0.96%, 4=92.51%, 10=6.53% 00:32:50.031 cpu : usr=95.64%, sys=4.04%, ctx=8, majf=0, minf=9 00:32:50.031 IO depths : 1=0.1%, 2=3.4%, 4=67.9%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 issued rwts: total=12940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.031 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:50.031 filename1: (groupid=0, jobs=1): err= 0: pid=3580455: Fri Dec 13 09:44:02 2024 00:32:50.031 read: IOPS=2664, BW=20.8MiB/s (21.8MB/s)(104MiB/5003msec) 00:32:50.031 slat (nsec): min=6125, max=42787, avg=8933.29, stdev=3118.80 00:32:50.031 clat (usec): min=1202, max=6558, avg=2975.42, stdev=523.73 00:32:50.031 lat (usec): min=1208, max=6564, avg=2984.35, stdev=523.51 00:32:50.031 clat percentiles (usec): 00:32:50.031 | 1.00th=[ 1844], 5.00th=[ 2278], 10.00th=[ 2442], 
20.00th=[ 2606], 00:32:50.031 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:32:50.031 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3589], 95.00th=[ 4047], 00:32:50.031 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5669], 00:32:50.031 | 99.99th=[ 6521] 00:32:50.031 bw ( KiB/s): min=20128, max=22640, per=25.15%, avg=21315.56, stdev=872.43, samples=9 00:32:50.031 iops : min= 2516, max= 2830, avg=2664.44, stdev=109.05, samples=9 00:32:50.031 lat (msec) : 2=1.78%, 4=92.78%, 10=5.44% 00:32:50.031 cpu : usr=95.86%, sys=3.80%, ctx=10, majf=0, minf=9 00:32:50.031 IO depths : 1=0.2%, 2=4.5%, 4=67.0%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 issued rwts: total=13332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.031 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:50.031 filename1: (groupid=0, jobs=1): err= 0: pid=3580456: Fri Dec 13 09:44:02 2024 00:32:50.031 read: IOPS=2539, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5001msec) 00:32:50.031 slat (nsec): min=6103, max=70529, avg=8985.91, stdev=3167.67 00:32:50.031 clat (usec): min=639, max=7199, avg=3125.34, stdev=505.01 00:32:50.031 lat (usec): min=650, max=7206, avg=3134.32, stdev=504.76 00:32:50.031 clat percentiles (usec): 00:32:50.031 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2802], 00:32:50.031 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:32:50.031 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4228], 00:32:50.031 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5538], 00:32:50.031 | 99.99th=[ 7177] 00:32:50.031 bw ( KiB/s): min=18912, max=21008, per=24.01%, avg=20351.22, stdev=661.25, samples=9 00:32:50.031 iops : min= 2364, max= 2626, avg=2543.89, stdev=82.65, samples=9 00:32:50.031 lat (usec) : 750=0.01%, 1000=0.01% 00:32:50.031 lat (msec) : 2=0.40%, 4=92.75%, 10=6.84% 00:32:50.031 cpu : usr=96.04%, sys=3.64%, ctx=9, majf=0, minf=9 00:32:50.031 IO depths : 1=0.1%, 2=2.8%, 4=68.1%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.031 issued rwts: total=12698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.031 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:50.031 00:32:50.031 Run status group 0 (all jobs): 00:32:50.031 READ: bw=82.8MiB/s (86.8MB/s), 19.8MiB/s-21.9MiB/s (20.8MB/s-23.0MB/s), io=414MiB (434MB), run=5001-5004msec 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.031 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.032 00:32:50.032 real 0m24.514s 00:32:50.032 user 4m52.664s 00:32:50.032 sys 0m5.249s 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.032 09:44:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:50.032 ************************************ 00:32:50.032 END TEST fio_dif_rand_params 00:32:50.032 ************************************ 00:32:50.291 09:44:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:50.291 09:44:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:50.291 09:44:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.291 09:44:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:50.291 ************************************ 00:32:50.291 START TEST fio_dif_digest 00:32:50.291 ************************************ 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.291 bdev_null0 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.291 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:50.292 [2024-12-13 09:44:02.462700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:50.292 09:44:02 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:50.292 { 00:32:50.292 "params": { 00:32:50.292 "name": "Nvme$subsystem", 00:32:50.292 "trtype": "$TEST_TRANSPORT", 00:32:50.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:50.292 "adrfam": "ipv4", 00:32:50.292 "trsvcid": "$NVMF_PORT", 00:32:50.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:50.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:50.292 "hdgst": ${hdgst:-false}, 00:32:50.292 "ddgst": ${ddgst:-false} 00:32:50.292 }, 00:32:50.292 "method": "bdev_nvme_attach_controller" 00:32:50.292 } 00:32:50.292 EOF 00:32:50.292 )") 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:50.292 "params": { 00:32:50.292 "name": "Nvme0", 00:32:50.292 "trtype": "tcp", 00:32:50.292 "traddr": "10.0.0.2", 00:32:50.292 "adrfam": "ipv4", 00:32:50.292 "trsvcid": "4420", 00:32:50.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:50.292 "hdgst": true, 00:32:50.292 "ddgst": true 00:32:50.292 }, 00:32:50.292 "method": "bdev_nvme_attach_controller" 00:32:50.292 }' 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:50.292 09:44:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.551 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:50.551 ... 
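For the digest variant the generated attach parameters flip "hdgst" and "ddgst" to true, so the initiator negotiates NVMe/TCP header and data digests, and the backing null bdev is created with DIF type 3. A rough stand-alone equivalent of the target-side setup traced above, issued with SPDK's scripts/rpc.py against an nvmf_tgt that already has a TCP transport (the RPC path is a placeholder):

RPC=/path/to/spdk/scripts/rpc.py    # assumption: your SPDK checkout
# Small null bdev (64 x 512) with 16-byte metadata and DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Expose it over NVMe/TCP on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420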
00:32:50.551 fio-3.35 00:32:50.551 Starting 3 threads 00:33:02.760 00:33:02.760 filename0: (groupid=0, jobs=1): err= 0: pid=3581695: Fri Dec 13 09:44:13 2024 00:33:02.760 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(360MiB/10007msec) 00:33:02.760 slat (nsec): min=6345, max=43294, avg=17243.01, stdev=6864.02 00:33:02.760 clat (usec): min=7387, max=13111, avg=10407.30, stdev=701.04 00:33:02.760 lat (usec): min=7400, max=13124, avg=10424.54, stdev=700.68 00:33:02.760 clat percentiles (usec): 00:33:02.760 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:33:02.760 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:33:02.760 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:33:02.760 | 99.00th=[11994], 99.50th=[12256], 99.90th=[13042], 99.95th=[13042], 00:33:02.760 | 99.99th=[13173] 00:33:02.760 bw ( KiB/s): min=36096, max=37632, per=35.17%, avg=36812.80, stdev=420.24, samples=20 00:33:02.761 iops : min= 282, max= 294, avg=287.60, stdev= 3.28, samples=20 00:33:02.761 lat (msec) : 10=26.85%, 20=73.15% 00:33:02.761 cpu : usr=95.64%, sys=4.04%, ctx=23, majf=0, minf=61 00:33:02.761 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.761 filename0: (groupid=0, jobs=1): err= 0: pid=3581696: Fri Dec 13 09:44:13 2024 00:33:02.761 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(330MiB/10046msec) 00:33:02.761 slat (nsec): min=6413, max=62920, avg=17947.34, stdev=7156.16 00:33:02.761 clat (usec): min=8250, max=46971, avg=11397.00, stdev=1214.23 00:33:02.761 lat (usec): min=8274, max=46999, avg=11414.95, stdev=1214.12 00:33:02.761 clat percentiles (usec): 00:33:02.761 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:33:02.761 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:33:02.761 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649], 00:33:02.761 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14353], 99.95th=[45876], 00:33:02.761 | 99.99th=[46924] 00:33:02.761 bw ( KiB/s): min=33024, max=34816, per=32.21%, avg=33715.20, stdev=407.74, samples=20 00:33:02.761 iops : min= 258, max= 272, avg=263.40, stdev= 3.19, samples=20 00:33:02.761 lat (msec) : 10=2.96%, 20=96.97%, 50=0.08% 00:33:02.761 cpu : usr=95.63%, sys=4.04%, ctx=16, majf=0, minf=50 00:33:02.761 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.761 filename0: (groupid=0, jobs=1): err= 0: pid=3581697: Fri Dec 13 09:44:13 2024 00:33:02.761 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10044msec) 00:33:02.761 slat (nsec): min=6872, max=64416, avg=21452.77, stdev=5278.47 00:33:02.761 clat (usec): min=8780, max=51356, avg=11125.12, stdev=1268.03 00:33:02.761 lat (usec): min=8805, max=51378, avg=11146.58, stdev=1267.94 00:33:02.761 clat percentiles (usec): 00:33:02.761 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:33:02.761 | 
30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:33:02.761 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:33:02.761 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15008], 99.95th=[46924], 00:33:02.761 | 99.99th=[51119] 00:33:02.761 bw ( KiB/s): min=33792, max=35328, per=32.98%, avg=34521.60, stdev=400.70, samples=20 00:33:02.761 iops : min= 264, max= 276, avg=269.70, stdev= 3.13, samples=20 00:33:02.761 lat (msec) : 10=6.41%, 20=93.52%, 50=0.04%, 100=0.04% 00:33:02.761 cpu : usr=96.05%, sys=3.17%, ctx=489, majf=0, minf=56 00:33:02.761 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.761 issued rwts: total=2699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.761 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.761 00:33:02.761 Run status group 0 (all jobs): 00:33:02.761 READ: bw=102MiB/s (107MB/s), 32.8MiB/s-36.0MiB/s (34.4MB/s-37.7MB/s), io=1027MiB (1077MB), run=10007-10046msec 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.761 00:33:02.761 real 0m11.267s 00:33:02.761 user 0m35.643s 00:33:02.761 sys 0m1.459s 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.761 09:44:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:02.761 ************************************ 00:33:02.761 END TEST fio_dif_digest 00:33:02.761 ************************************ 00:33:02.761 09:44:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:02.761 09:44:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.761 rmmod nvme_tcp 00:33:02.761 rmmod nvme_fabrics 00:33:02.761 rmmod nvme_keyring 00:33:02.761 09:44:13 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3573127 ']' 00:33:02.761 09:44:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3573127 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3573127 ']' 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3573127 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3573127 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3573127' 00:33:02.761 killing process with pid 3573127 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3573127 00:33:02.761 09:44:13 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3573127 00:33:02.761 09:44:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:02.761 09:44:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:03.745 Waiting for block devices as requested 00:33:04.004 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:04.004 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:04.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:04.263 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:04.263 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:04.263 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:04.263 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:04.522 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:04.522 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:04.522 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:04.522 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:04.780 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:04.780 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:04.780 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:05.039 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:05.039 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:05.039 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.039 09:44:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.039 09:44:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.039 09:44:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.575 09:44:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.575 
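The tail of the run above is the shared nvmftestfini teardown: unload the NVMe/TCP initiator modules, kill the nvmf_tgt process, strip the SPDK_NVMF iptables rules, run setup.sh reset to rebind devices, and flush the test addresses. A condensed sketch of that sequence, assuming $nvmfpid holds the nvmf_tgt PID and cvl_0_0_ns_spdk / cvl_0_1 are the namespace and initiator interface used above:

# Unload initiator-side modules (nvme-fabrics and nvme-keyring drop out with them)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Stop the target; the harness additionally polls until the PID is gone
kill "$nvmfpid" 2>/dev/null || true
# Remove only the iptables rules the test tagged with SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Tear down the target namespace and flush the initiator address
# (assumption: this mirrors what _remove_spdk_ns does in the harness)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1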
00:33:07.575 real 1m13.245s 00:33:07.575 user 7m9.899s 00:33:07.575 sys 0m19.833s 00:33:07.575 09:44:19 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.575 09:44:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.575 ************************************ 00:33:07.575 END TEST nvmf_dif 00:33:07.575 ************************************ 00:33:07.575 09:44:19 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:07.575 09:44:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:07.575 09:44:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.575 09:44:19 -- common/autotest_common.sh@10 -- # set +x 00:33:07.575 ************************************ 00:33:07.575 START TEST nvmf_abort_qd_sizes 00:33:07.575 ************************************ 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:07.575 * Looking for test storage... 00:33:07.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.575 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.576 --rc genhtml_branch_coverage=1 00:33:07.576 --rc genhtml_function_coverage=1 00:33:07.576 --rc genhtml_legend=1 00:33:07.576 --rc geninfo_all_blocks=1 00:33:07.576 --rc geninfo_unexecuted_blocks=1 00:33:07.576 00:33:07.576 ' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.576 --rc genhtml_branch_coverage=1 00:33:07.576 --rc genhtml_function_coverage=1 00:33:07.576 --rc genhtml_legend=1 00:33:07.576 --rc geninfo_all_blocks=1 00:33:07.576 --rc geninfo_unexecuted_blocks=1 00:33:07.576 00:33:07.576 ' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.576 --rc genhtml_branch_coverage=1 00:33:07.576 --rc genhtml_function_coverage=1 00:33:07.576 --rc genhtml_legend=1 00:33:07.576 --rc geninfo_all_blocks=1 00:33:07.576 --rc geninfo_unexecuted_blocks=1 00:33:07.576 00:33:07.576 ' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:07.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.576 --rc genhtml_branch_coverage=1 00:33:07.576 --rc genhtml_function_coverage=1 00:33:07.576 --rc genhtml_legend=1 00:33:07.576 --rc geninfo_all_blocks=1 00:33:07.576 --rc geninfo_unexecuted_blocks=1 00:33:07.576 00:33:07.576 ' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:07.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.576 09:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.849 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:12.850 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:12.850 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:12.850 Found net devices under 0000:af:00.0: cvl_0_0 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:12.850 Found net devices under 0000:af:00.1: cvl_0_1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.850 09:44:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:33:12.850 00:33:12.850 --- 10.0.0.2 ping statistics --- 00:33:12.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.850 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
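Everything from the namespace creation through the two pings above is nvmf_tcp_init wiring the two detected E810 ports back to back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and TCP port 4420 is opened in iptables. A standalone sketch of the same sequence, with the interface and namespace names taken from this run:

    TGT_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1        # target ns -> root ns

Splitting the two ports across namespaces forces the NVMe/TCP traffic onto the physical E810 link rather than loopback, which is what the phy (NET_TYPE=phy) flavour of this job is exercising.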
00:33:12.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:33:12.850 00:33:12.850 --- 10.0.0.1 ping statistics --- 00:33:12.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.850 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:12.850 09:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.755 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:14.755 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:15.691 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3589241 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3589241 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3589241 ']' 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
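nvmfappstart then launches the SPDK target inside that namespace and blocks until its JSON-RPC socket answers; the pid it records (3589241 in this run) is what gets killed at the end of the test. A rough sketch of that start-and-wait step; the polling loop is only illustrative, not the exact waitforlisten implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target accepts commands.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done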
00:33:15.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.691 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:15.951 [2024-12-13 09:44:28.098337] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:33:15.951 [2024-12-13 09:44:28.098379] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.951 [2024-12-13 09:44:28.164475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:15.951 [2024-12-13 09:44:28.207080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.951 [2024-12-13 09:44:28.207118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.951 [2024-12-13 09:44:28.207125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.951 [2024-12-13 09:44:28.207130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.951 [2024-12-13 09:44:28.207136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:15.951 [2024-12-13 09:44:28.208499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.951 [2024-12-13 09:44:28.208599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.951 [2024-12-13 09:44:28.208620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:15.951 [2024-12-13 09:44:28.208621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.951 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.951 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:15.951 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:15.951 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.951 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:33:16.210 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:16.211 
09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.211 09:44:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:16.211 ************************************ 00:33:16.211 START TEST spdk_target_abort 00:33:16.211 ************************************ 00:33:16.211 09:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:16.211 09:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:16.211 09:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:16.211 09:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.211 09:44:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.497 spdk_targetn1 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.497 [2024-12-13 09:44:31.224438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.497 [2024-12-13 09:44:31.268729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:19.497 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:19.498 09:44:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:22.786 Initializing NVMe Controllers 00:33:22.786 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:22.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:22.786 Initialization complete. Launching workers. 00:33:22.786 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15305, failed: 0 00:33:22.786 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1357, failed to submit 13948 00:33:22.786 success 740, unsuccessful 617, failed 0 00:33:22.786 09:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:22.786 09:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:26.069 Initializing NVMe Controllers 00:33:26.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:26.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:26.069 Initialization complete. Launching workers. 00:33:26.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8505, failed: 0 00:33:26.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7287 00:33:26.069 success 320, unsuccessful 898, failed 0 00:33:26.069 09:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:26.069 09:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:28.603 Initializing NVMe Controllers 00:33:28.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:28.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:28.603 Initialization complete. Launching workers. 
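The spdk_target_abort case above is configured entirely over JSON-RPC: the local NVMe disk at 0000:5e:00.0 is attached as a bdev (spdk_targetn1), exported through subsystem nqn.2016-06.io.spdk:testnqn on the 10.0.0.2 listener, and the abort example then drives it at queue depths 4, 24 and 64. Written out as direct scripts/rpc.py calls instead of the test's rpc_cmd wrapper, the sequence is roughly:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target   # yields bdev spdk_targetn1
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # Abort exerciser: 50/50 read/write at 4 KiB, sweeping the queue depth.
    for qd in 4 24 64; do
        "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done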
00:33:28.603 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38327, failed: 0 00:33:28.603 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2819, failed to submit 35508 00:33:28.603 success 588, unsuccessful 2231, failed 0 00:33:28.604 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:28.604 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.604 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:28.862 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.862 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:28.862 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.862 09:44:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3589241 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3589241 ']' 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3589241 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3589241 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3589241' 00:33:30.240 killing process with pid 3589241 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3589241 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3589241 00:33:30.240 00:33:30.240 real 0m14.085s 00:33:30.240 user 0m53.676s 00:33:30.240 sys 0m2.570s 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:30.240 ************************************ 00:33:30.240 END TEST spdk_target_abort 00:33:30.240 ************************************ 00:33:30.240 09:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:30.240 09:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:30.240 09:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.240 09:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:30.240 ************************************ 00:33:30.240 START TEST kernel_target_abort 00:33:30.240 
************************************ 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:30.240 09:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:32.775 Waiting for block devices as requested 00:33:32.775 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:32.775 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:32.775 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:32.775 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:32.775 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:33.033 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:33.033 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:33.033 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:33.033 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:33.291 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:33.291 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:33.291 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:33.550 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:33.550 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:33.550 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:33.809 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:33.809 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:33.809 No valid GPT data, bailing 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:33.809 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:33.810 09:44:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:33.810 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:34.069 00:33:34.069 Discovery Log Number of Records 2, Generation counter 2 00:33:34.069 =====Discovery Log Entry 0====== 00:33:34.069 trtype: tcp 00:33:34.069 adrfam: ipv4 00:33:34.069 subtype: current discovery subsystem 00:33:34.069 treq: not specified, sq flow control disable supported 00:33:34.069 portid: 1 00:33:34.069 trsvcid: 4420 00:33:34.069 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:34.069 traddr: 10.0.0.1 00:33:34.069 eflags: none 00:33:34.069 sectype: none 00:33:34.069 =====Discovery Log Entry 1====== 00:33:34.069 trtype: tcp 00:33:34.069 adrfam: ipv4 00:33:34.069 subtype: nvme subsystem 00:33:34.069 treq: not specified, sq flow control disable supported 00:33:34.069 portid: 1 00:33:34.069 trsvcid: 4420 00:33:34.069 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:34.069 traddr: 10.0.0.1 00:33:34.069 eflags: none 00:33:34.069 sectype: none 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.069 09:44:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:34.069 09:44:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:37.355 Initializing NVMe Controllers 00:33:37.355 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:37.355 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:37.355 Initialization complete. Launching workers. 00:33:37.355 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93823, failed: 0 00:33:37.355 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93823, failed to submit 0 00:33:37.355 success 0, unsuccessful 93823, failed 0 00:33:37.355 09:44:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:37.355 09:44:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:40.639 Initializing NVMe Controllers 00:33:40.639 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:40.639 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:40.639 Initialization complete. Launching workers. 
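The kernel_target_abort case above needs no SPDK target at all: configure_kernel_target builds an in-kernel NVMe/TCP target through the nvmet configfs tree, backed by the local /dev/nvme0n1, and the same abort exerciser runs against 10.0.0.1. The xtrace only records the echoed values, not the files they are redirected into; the sketch below uses the standard nvmet attribute names those writes normally land in, so treat it as a reconstruction rather than a literal copy of the script:

    NQN=nqn.2016-06.io.spdk:testnqn
    SUB=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    modprobe nvmet-tcp
    mkdir -p "$SUB/namespaces/1" "$PORT"
    echo 1            > "$SUB/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
    echo 1            > "$SUB/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/$NQN"
    nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list the discovery subsystem plus $NQN

Teardown at the end of the test is the mirror image visible further down: unlink the port/subsystems entry, remove the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.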
00:33:40.639 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146307, failed: 0 00:33:40.639 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36714, failed to submit 109593 00:33:40.639 success 0, unsuccessful 36714, failed 0 00:33:40.639 09:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:40.640 09:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:43.927 Initializing NVMe Controllers 00:33:43.927 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:43.927 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:43.927 Initialization complete. Launching workers. 00:33:43.927 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137782, failed: 0 00:33:43.927 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34506, failed to submit 103276 00:33:43.927 success 0, unsuccessful 34506, failed 0 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:43.927 09:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:45.835 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:45.835 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:45.835 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:45.835 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:46.095 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:33:46.095 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:47.033 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:47.033 00:33:47.033 real 0m16.698s 00:33:47.033 user 0m8.725s 00:33:47.033 sys 0m4.569s 00:33:47.033 09:44:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.033 09:44:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.033 ************************************ 00:33:47.033 END TEST kernel_target_abort 00:33:47.033 ************************************ 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.033 rmmod nvme_tcp 00:33:47.033 rmmod nvme_fabrics 00:33:47.033 rmmod nvme_keyring 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3589241 ']' 00:33:47.033 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3589241 00:33:47.034 09:44:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3589241 ']' 00:33:47.034 09:44:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3589241 00:33:47.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3589241) - No such process 00:33:47.034 09:44:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3589241 is not found' 00:33:47.034 Process with pid 3589241 is not found 00:33:47.034 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:47.034 09:44:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:49.566 Waiting for block devices as requested 00:33:49.566 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:49.825 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:49.825 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:49.825 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:49.825 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:50.083 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:50.083 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:50.083 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:50.083 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:50.343 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:50.343 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:50.343 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:50.601 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:50.601 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:50.601 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:50.601 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:50.945 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:50.945 09:45:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.938 09:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.938 00:33:52.938 real 0m45.654s 00:33:52.938 user 1m6.001s 00:33:52.938 sys 0m14.542s 00:33:52.938 09:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.938 09:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:52.938 ************************************ 00:33:52.938 END TEST nvmf_abort_qd_sizes 00:33:52.938 ************************************ 00:33:52.938 09:45:05 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:52.938 09:45:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:52.938 09:45:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.938 09:45:05 -- common/autotest_common.sh@10 -- # set +x 00:33:52.938 ************************************ 00:33:52.938 START TEST keyring_file 00:33:52.938 ************************************ 00:33:52.938 09:45:05 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:53.198 * Looking for test storage... 
00:33:53.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:53.198 09:45:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.198 --rc genhtml_branch_coverage=1 00:33:53.198 --rc genhtml_function_coverage=1 00:33:53.198 --rc genhtml_legend=1 00:33:53.198 --rc geninfo_all_blocks=1 00:33:53.198 --rc geninfo_unexecuted_blocks=1 00:33:53.198 00:33:53.198 ' 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.198 --rc genhtml_branch_coverage=1 00:33:53.198 --rc genhtml_function_coverage=1 00:33:53.198 --rc genhtml_legend=1 00:33:53.198 --rc geninfo_all_blocks=1 
00:33:53.198 --rc geninfo_unexecuted_blocks=1 00:33:53.198 00:33:53.198 ' 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.198 --rc genhtml_branch_coverage=1 00:33:53.198 --rc genhtml_function_coverage=1 00:33:53.198 --rc genhtml_legend=1 00:33:53.198 --rc geninfo_all_blocks=1 00:33:53.198 --rc geninfo_unexecuted_blocks=1 00:33:53.198 00:33:53.198 ' 00:33:53.198 09:45:05 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:53.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.198 --rc genhtml_branch_coverage=1 00:33:53.198 --rc genhtml_function_coverage=1 00:33:53.198 --rc genhtml_legend=1 00:33:53.198 --rc geninfo_all_blocks=1 00:33:53.198 --rc geninfo_unexecuted_blocks=1 00:33:53.198 00:33:53.198 ' 00:33:53.198 09:45:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:53.198 09:45:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:53.198 09:45:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:53.199 09:45:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:53.199 09:45:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:53.199 09:45:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.199 09:45:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.199 09:45:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.199 09:45:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.199 09:45:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.199 09:45:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:53.199 09:45:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:53.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0WoV06xhvm 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0WoV06xhvm 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0WoV06xhvm 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0WoV06xhvm 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mkrt4ZaSbb 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:53.199 09:45:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mkrt4ZaSbb 00:33:53.199 09:45:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mkrt4ZaSbb 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mkrt4ZaSbb 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:53.199 09:45:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=3597822 00:33:53.458 09:45:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3597822 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3597822 ']' 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.458 09:45:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:53.458 [2024-12-13 09:45:05.599985] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:33:53.458 [2024-12-13 09:45:05.600031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597822 ] 00:33:53.458 [2024-12-13 09:45:05.663220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.458 [2024-12-13 09:45:05.703697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:53.717 09:45:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:53.717 [2024-12-13 09:45:05.917710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.717 null0 00:33:53.717 [2024-12-13 09:45:05.949763] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:53.717 [2024-12-13 09:45:05.950042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.717 09:45:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:53.717 09:45:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:53.718 [2024-12-13 09:45:05.973813] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:53.718 request: 00:33:53.718 { 00:33:53.718 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.718 "secure_channel": false, 00:33:53.718 "listen_address": { 00:33:53.718 "trtype": "tcp", 00:33:53.718 "traddr": "127.0.0.1", 00:33:53.718 "trsvcid": "4420" 00:33:53.718 }, 00:33:53.718 "method": "nvmf_subsystem_add_listener", 00:33:53.718 "req_id": 1 00:33:53.718 } 00:33:53.718 Got JSON-RPC error response 00:33:53.718 response: 00:33:53.718 { 00:33:53.718 
"code": -32602, 00:33:53.718 "message": "Invalid parameters" 00:33:53.718 } 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:53.718 09:45:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=3597828 00:33:53.718 09:45:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3597828 /var/tmp/bperf.sock 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3597828 ']' 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:53.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.718 09:45:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:53.718 09:45:05 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:53.718 [2024-12-13 09:45:06.026517] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:33:53.718 [2024-12-13 09:45:06.026556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597828 ] 00:33:53.977 [2024-12-13 09:45:06.090018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.977 [2024-12-13 09:45:06.129689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.977 09:45:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.977 09:45:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:53.977 09:45:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:53.977 09:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:54.236 09:45:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mkrt4ZaSbb 00:33:54.236 09:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mkrt4ZaSbb 00:33:54.236 09:45:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:54.236 09:45:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:54.236 09:45:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.236 09:45:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:54.236 09:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:33:54.495 09:45:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0WoV06xhvm == \/\t\m\p\/\t\m\p\.\0\W\o\V\0\6\x\h\v\m ]] 00:33:54.495 09:45:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:54.495 09:45:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:54.495 09:45:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.495 09:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.495 09:45:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:54.756 09:45:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mkrt4ZaSbb == \/\t\m\p\/\t\m\p\.\m\k\r\t\4\Z\a\S\b\b ]] 00:33:54.756 09:45:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:54.756 09:45:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:54.756 09:45:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:54.757 09:45:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:54.757 09:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:54.757 09:45:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:55.016 09:45:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:55.016 09:45:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.016 09:45:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:55.016 09:45:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:55.016 09:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:55.275 [2024-12-13 09:45:07.524805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:55.275 nvme0n1 00:33:55.275 09:45:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:55.275 09:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:55.275 09:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:55.275 09:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:55.275 09:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.275 09:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:55.534 09:45:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:55.534 09:45:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:55.534 09:45:07 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:33:55.534 09:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:55.534 09:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:55.534 09:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:55.534 09:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:55.793 09:45:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:55.793 09:45:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:55.793 Running I/O for 1 seconds... 00:33:56.988 18613.00 IOPS, 72.71 MiB/s 00:33:56.988 Latency(us) 00:33:56.988 [2024-12-13T08:45:09.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.988 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:56.988 nvme0n1 : 1.00 18660.02 72.89 0.00 0.00 6846.87 3042.74 15478.98 00:33:56.988 [2024-12-13T08:45:09.354Z] =================================================================================================================== 00:33:56.988 [2024-12-13T08:45:09.354Z] Total : 18660.02 72.89 0.00 0.00 6846.87 3042.74 15478.98 00:33:56.988 { 00:33:56.988 "results": [ 00:33:56.988 { 00:33:56.988 "job": "nvme0n1", 00:33:56.988 "core_mask": "0x2", 00:33:56.988 "workload": "randrw", 00:33:56.988 "percentage": 50, 00:33:56.988 "status": "finished", 00:33:56.988 "queue_depth": 128, 00:33:56.988 "io_size": 4096, 00:33:56.988 "runtime": 1.004447, 00:33:56.988 "iops": 18660.018895969624, 00:33:56.988 "mibps": 72.89069881238134, 00:33:56.988 "io_failed": 0, 00:33:56.988 "io_timeout": 0, 00:33:56.989 "avg_latency_us": 6846.873029727924, 00:33:56.989 "min_latency_us": 3042.7428571428572, 00:33:56.989 "max_latency_us": 15478.979047619048 00:33:56.989 } 00:33:56.989 ], 00:33:56.989 "core_count": 1 00:33:56.989 } 00:33:56.989 09:45:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:56.989 09:45:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:56.989 09:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.247 09:45:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:57.247 09:45:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:57.247 09:45:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:57.247 09:45:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:57.247 09:45:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.247 09:45:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:57.247 09:45:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.505 09:45:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:57.505 09:45:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:57.506 09:45:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:57.506 09:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:57.764 [2024-12-13 09:45:09.881681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:57.764 [2024-12-13 09:45:09.881702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be470 (107): Transport endpoint is not connected 00:33:57.764 [2024-12-13 09:45:09.882697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be470 (9): Bad file descriptor 00:33:57.764 [2024-12-13 09:45:09.883699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:57.764 [2024-12-13 09:45:09.883709] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:57.764 [2024-12-13 09:45:09.883717] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:57.764 [2024-12-13 09:45:09.883726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:57.764 request: 00:33:57.764 { 00:33:57.764 "name": "nvme0", 00:33:57.764 "trtype": "tcp", 00:33:57.764 "traddr": "127.0.0.1", 00:33:57.764 "adrfam": "ipv4", 00:33:57.764 "trsvcid": "4420", 00:33:57.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:57.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:57.764 "prchk_reftag": false, 00:33:57.764 "prchk_guard": false, 00:33:57.764 "hdgst": false, 00:33:57.764 "ddgst": false, 00:33:57.764 "psk": "key1", 00:33:57.764 "allow_unrecognized_csi": false, 00:33:57.764 "method": "bdev_nvme_attach_controller", 00:33:57.764 "req_id": 1 00:33:57.764 } 00:33:57.764 Got JSON-RPC error response 00:33:57.764 response: 00:33:57.764 { 00:33:57.764 "code": -5, 00:33:57.764 "message": "Input/output error" 00:33:57.764 } 00:33:57.764 09:45:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:57.764 09:45:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:57.764 09:45:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:57.764 09:45:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:57.764 09:45:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:57.764 09:45:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:57.764 09:45:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:57.764 09:45:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.764 09:45:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:57.764 09:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:57.764 09:45:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:57.764 09:45:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:57.764 09:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:57.764 09:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:57.764 09:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:57.764 09:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:57.764 09:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:58.022 09:45:10 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:58.022 09:45:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:58.022 09:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:58.281 09:45:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:58.281 09:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:58.540 09:45:10 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:58.540 09:45:10 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:58.540 09:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:58.540 09:45:10 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:58.540 09:45:10 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0WoV06xhvm 00:33:58.540 09:45:10 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:58.540 09:45:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:58.540 09:45:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:58.799 [2024-12-13 09:45:11.018737] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0WoV06xhvm': 0100660 00:33:58.799 [2024-12-13 09:45:11.018762] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:58.799 request: 00:33:58.799 { 00:33:58.799 "name": "key0", 00:33:58.799 "path": "/tmp/tmp.0WoV06xhvm", 00:33:58.799 "method": "keyring_file_add_key", 00:33:58.799 "req_id": 1 00:33:58.799 } 00:33:58.799 Got JSON-RPC error response 00:33:58.799 response: 00:33:58.799 { 00:33:58.799 "code": -1, 00:33:58.799 "message": "Operation not permitted" 00:33:58.799 } 00:33:58.799 09:45:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:58.799 09:45:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:58.799 09:45:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:58.799 09:45:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:58.799 09:45:11 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0WoV06xhvm 00:33:58.799 09:45:11 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:58.799 09:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0WoV06xhvm 00:33:59.057 09:45:11 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0WoV06xhvm 00:33:59.057 09:45:11 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:59.057 09:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:59.057 09:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:59.057 09:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:59.057 09:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:59.057 09:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:59.057 09:45:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:59.057 09:45:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:59.057 09:45:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.058 09:45:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:59.058 09:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:59.317 [2024-12-13 09:45:11.580238] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0WoV06xhvm': No such file or directory 00:33:59.317 [2024-12-13 09:45:11.580265] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:59.317 [2024-12-13 09:45:11.580280] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:59.317 [2024-12-13 09:45:11.580287] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:59.317 [2024-12-13 09:45:11.580294] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:59.317 [2024-12-13 09:45:11.580300] bdev_nvme.c:6802:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:59.317 request: 00:33:59.317 { 00:33:59.317 "name": "nvme0", 00:33:59.317 "trtype": "tcp", 00:33:59.317 "traddr": "127.0.0.1", 00:33:59.317 "adrfam": "ipv4", 00:33:59.317 "trsvcid": "4420", 00:33:59.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.317 "prchk_reftag": false, 00:33:59.317 "prchk_guard": false, 00:33:59.317 "hdgst": false, 00:33:59.317 "ddgst": false, 00:33:59.317 "psk": "key0", 00:33:59.317 "allow_unrecognized_csi": false, 00:33:59.317 "method": "bdev_nvme_attach_controller", 00:33:59.317 "req_id": 1 00:33:59.317 } 00:33:59.317 Got JSON-RPC error response 00:33:59.317 response: 00:33:59.317 { 00:33:59.317 "code": -19, 00:33:59.317 "message": "No such device" 00:33:59.317 } 00:33:59.317 09:45:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:59.317 09:45:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:59.317 09:45:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:59.317 09:45:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:59.317 09:45:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:59.317 09:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:59.576 09:45:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JpDpT3tDFm 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:59.576 09:45:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JpDpT3tDFm 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JpDpT3tDFm 00:33:59.576 09:45:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JpDpT3tDFm 00:33:59.576 09:45:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JpDpT3tDFm 00:33:59.576 09:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JpDpT3tDFm 00:33:59.834 09:45:12 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:59.834 09:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:00.093 nvme0n1 00:34:00.093 09:45:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:00.093 09:45:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:00.093 09:45:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:00.093 09:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:00.093 09:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:00.093 09:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.351 09:45:12 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:00.351 09:45:12 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:00.351 09:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:00.351 09:45:12 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:00.351 09:45:12 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:00.352 09:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:00.352 09:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:00.352 09:45:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.610 09:45:12 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:00.610 09:45:12 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:00.610 09:45:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:00.610 09:45:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:00.610 09:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:00.610 09:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:00.610 09:45:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.868 09:45:13 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:00.868 09:45:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:00.868 09:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:00.868 09:45:13 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:00.868 09:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:00.868 09:45:13 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:01.127 09:45:13 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:01.127 09:45:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JpDpT3tDFm 00:34:01.127 09:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JpDpT3tDFm 00:34:01.385 09:45:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mkrt4ZaSbb 00:34:01.385 09:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mkrt4ZaSbb 00:34:01.644 09:45:13 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:01.644 09:45:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:01.902 nvme0n1 00:34:01.902 09:45:14 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:01.902 09:45:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:02.162 09:45:14 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:02.162 "subsystems": [ 00:34:02.162 { 00:34:02.162 "subsystem": "keyring", 00:34:02.162 "config": [ 00:34:02.162 { 00:34:02.162 "method": "keyring_file_add_key", 00:34:02.162 "params": { 00:34:02.162 "name": "key0", 00:34:02.162 "path": "/tmp/tmp.JpDpT3tDFm" 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "keyring_file_add_key", 00:34:02.162 "params": { 00:34:02.162 "name": "key1", 00:34:02.162 "path": "/tmp/tmp.mkrt4ZaSbb" 00:34:02.162 } 00:34:02.162 } 00:34:02.162 ] 00:34:02.162 
}, 00:34:02.162 { 00:34:02.162 "subsystem": "iobuf", 00:34:02.162 "config": [ 00:34:02.162 { 00:34:02.162 "method": "iobuf_set_options", 00:34:02.162 "params": { 00:34:02.162 "small_pool_count": 8192, 00:34:02.162 "large_pool_count": 1024, 00:34:02.162 "small_bufsize": 8192, 00:34:02.162 "large_bufsize": 135168, 00:34:02.162 "enable_numa": false 00:34:02.162 } 00:34:02.162 } 00:34:02.162 ] 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "subsystem": "sock", 00:34:02.162 "config": [ 00:34:02.162 { 00:34:02.162 "method": "sock_set_default_impl", 00:34:02.162 "params": { 00:34:02.162 "impl_name": "posix" 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "sock_impl_set_options", 00:34:02.162 "params": { 00:34:02.162 "impl_name": "ssl", 00:34:02.162 "recv_buf_size": 4096, 00:34:02.162 "send_buf_size": 4096, 00:34:02.162 "enable_recv_pipe": true, 00:34:02.162 "enable_quickack": false, 00:34:02.162 "enable_placement_id": 0, 00:34:02.162 "enable_zerocopy_send_server": true, 00:34:02.162 "enable_zerocopy_send_client": false, 00:34:02.162 "zerocopy_threshold": 0, 00:34:02.162 "tls_version": 0, 00:34:02.162 "enable_ktls": false 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "sock_impl_set_options", 00:34:02.162 "params": { 00:34:02.162 "impl_name": "posix", 00:34:02.162 "recv_buf_size": 2097152, 00:34:02.162 "send_buf_size": 2097152, 00:34:02.162 "enable_recv_pipe": true, 00:34:02.162 "enable_quickack": false, 00:34:02.162 "enable_placement_id": 0, 00:34:02.162 "enable_zerocopy_send_server": true, 00:34:02.162 "enable_zerocopy_send_client": false, 00:34:02.162 "zerocopy_threshold": 0, 00:34:02.162 "tls_version": 0, 00:34:02.162 "enable_ktls": false 00:34:02.162 } 00:34:02.162 } 00:34:02.162 ] 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "subsystem": "vmd", 00:34:02.162 "config": [] 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "subsystem": "accel", 00:34:02.162 "config": [ 00:34:02.162 { 00:34:02.162 "method": "accel_set_options", 00:34:02.162 "params": { 00:34:02.162 "small_cache_size": 128, 00:34:02.162 "large_cache_size": 16, 00:34:02.162 "task_count": 2048, 00:34:02.162 "sequence_count": 2048, 00:34:02.162 "buf_count": 2048 00:34:02.162 } 00:34:02.162 } 00:34:02.162 ] 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "subsystem": "bdev", 00:34:02.162 "config": [ 00:34:02.162 { 00:34:02.162 "method": "bdev_set_options", 00:34:02.162 "params": { 00:34:02.162 "bdev_io_pool_size": 65535, 00:34:02.162 "bdev_io_cache_size": 256, 00:34:02.162 "bdev_auto_examine": true, 00:34:02.162 "iobuf_small_cache_size": 128, 00:34:02.162 "iobuf_large_cache_size": 16 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "bdev_raid_set_options", 00:34:02.162 "params": { 00:34:02.162 "process_window_size_kb": 1024, 00:34:02.162 "process_max_bandwidth_mb_sec": 0 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "bdev_iscsi_set_options", 00:34:02.162 "params": { 00:34:02.162 "timeout_sec": 30 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "bdev_nvme_set_options", 00:34:02.162 "params": { 00:34:02.162 "action_on_timeout": "none", 00:34:02.162 "timeout_us": 0, 00:34:02.162 "timeout_admin_us": 0, 00:34:02.162 "keep_alive_timeout_ms": 10000, 00:34:02.162 "arbitration_burst": 0, 00:34:02.162 "low_priority_weight": 0, 00:34:02.162 "medium_priority_weight": 0, 00:34:02.162 "high_priority_weight": 0, 00:34:02.162 "nvme_adminq_poll_period_us": 10000, 00:34:02.162 "nvme_ioq_poll_period_us": 0, 00:34:02.162 "io_queue_requests": 512, 00:34:02.162 
"delay_cmd_submit": true, 00:34:02.162 "transport_retry_count": 4, 00:34:02.162 "bdev_retry_count": 3, 00:34:02.162 "transport_ack_timeout": 0, 00:34:02.162 "ctrlr_loss_timeout_sec": 0, 00:34:02.162 "reconnect_delay_sec": 0, 00:34:02.162 "fast_io_fail_timeout_sec": 0, 00:34:02.162 "disable_auto_failback": false, 00:34:02.162 "generate_uuids": false, 00:34:02.162 "transport_tos": 0, 00:34:02.162 "nvme_error_stat": false, 00:34:02.162 "rdma_srq_size": 0, 00:34:02.162 "io_path_stat": false, 00:34:02.162 "allow_accel_sequence": false, 00:34:02.162 "rdma_max_cq_size": 0, 00:34:02.162 "rdma_cm_event_timeout_ms": 0, 00:34:02.162 "dhchap_digests": [ 00:34:02.162 "sha256", 00:34:02.162 "sha384", 00:34:02.162 "sha512" 00:34:02.162 ], 00:34:02.162 "dhchap_dhgroups": [ 00:34:02.162 "null", 00:34:02.162 "ffdhe2048", 00:34:02.162 "ffdhe3072", 00:34:02.162 "ffdhe4096", 00:34:02.162 "ffdhe6144", 00:34:02.162 "ffdhe8192" 00:34:02.162 ] 00:34:02.162 } 00:34:02.162 }, 00:34:02.162 { 00:34:02.162 "method": "bdev_nvme_attach_controller", 00:34:02.162 "params": { 00:34:02.162 "name": "nvme0", 00:34:02.162 "trtype": "TCP", 00:34:02.162 "adrfam": "IPv4", 00:34:02.162 "traddr": "127.0.0.1", 00:34:02.162 "trsvcid": "4420", 00:34:02.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.162 "prchk_reftag": false, 00:34:02.162 "prchk_guard": false, 00:34:02.162 "ctrlr_loss_timeout_sec": 0, 00:34:02.162 "reconnect_delay_sec": 0, 00:34:02.162 "fast_io_fail_timeout_sec": 0, 00:34:02.162 "psk": "key0", 00:34:02.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.162 "hdgst": false, 00:34:02.162 "ddgst": false, 00:34:02.162 "multipath": "multipath" 00:34:02.162 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "bdev_nvme_set_hotplug", 00:34:02.163 "params": { 00:34:02.163 "period_us": 100000, 00:34:02.163 "enable": false 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "bdev_wait_for_examine" 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "subsystem": "nbd", 00:34:02.163 "config": [] 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }' 00:34:02.163 09:45:14 keyring_file -- keyring/file.sh@115 -- # killprocess 3597828 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3597828 ']' 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3597828 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3597828 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3597828' 00:34:02.163 killing process with pid 3597828 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@973 -- # kill 3597828 00:34:02.163 Received shutdown signal, test time was about 1.000000 seconds 00:34:02.163 00:34:02.163 Latency(us) 00:34:02.163 [2024-12-13T08:45:14.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.163 [2024-12-13T08:45:14.529Z] =================================================================================================================== 00:34:02.163 [2024-12-13T08:45:14.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:02.163 09:45:14 
keyring_file -- common/autotest_common.sh@978 -- # wait 3597828 00:34:02.163 09:45:14 keyring_file -- keyring/file.sh@118 -- # bperfpid=3599699 00:34:02.163 09:45:14 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3599699 /var/tmp/bperf.sock 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3599699 ']' 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:02.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.163 09:45:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:02.163 09:45:14 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:02.163 09:45:14 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:02.163 "subsystems": [ 00:34:02.163 { 00:34:02.163 "subsystem": "keyring", 00:34:02.163 "config": [ 00:34:02.163 { 00:34:02.163 "method": "keyring_file_add_key", 00:34:02.163 "params": { 00:34:02.163 "name": "key0", 00:34:02.163 "path": "/tmp/tmp.JpDpT3tDFm" 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "keyring_file_add_key", 00:34:02.163 "params": { 00:34:02.163 "name": "key1", 00:34:02.163 "path": "/tmp/tmp.mkrt4ZaSbb" 00:34:02.163 } 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "subsystem": "iobuf", 00:34:02.163 "config": [ 00:34:02.163 { 00:34:02.163 "method": "iobuf_set_options", 00:34:02.163 "params": { 00:34:02.163 "small_pool_count": 8192, 00:34:02.163 "large_pool_count": 1024, 00:34:02.163 "small_bufsize": 8192, 00:34:02.163 "large_bufsize": 135168, 00:34:02.163 "enable_numa": false 00:34:02.163 } 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "subsystem": "sock", 00:34:02.163 "config": [ 00:34:02.163 { 00:34:02.163 "method": "sock_set_default_impl", 00:34:02.163 "params": { 00:34:02.163 "impl_name": "posix" 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "sock_impl_set_options", 00:34:02.163 "params": { 00:34:02.163 "impl_name": "ssl", 00:34:02.163 "recv_buf_size": 4096, 00:34:02.163 "send_buf_size": 4096, 00:34:02.163 "enable_recv_pipe": true, 00:34:02.163 "enable_quickack": false, 00:34:02.163 "enable_placement_id": 0, 00:34:02.163 "enable_zerocopy_send_server": true, 00:34:02.163 "enable_zerocopy_send_client": false, 00:34:02.163 "zerocopy_threshold": 0, 00:34:02.163 "tls_version": 0, 00:34:02.163 "enable_ktls": false 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "sock_impl_set_options", 00:34:02.163 "params": { 00:34:02.163 "impl_name": "posix", 00:34:02.163 "recv_buf_size": 2097152, 00:34:02.163 "send_buf_size": 2097152, 00:34:02.163 "enable_recv_pipe": true, 00:34:02.163 "enable_quickack": false, 00:34:02.163 "enable_placement_id": 0, 00:34:02.163 "enable_zerocopy_send_server": true, 00:34:02.163 "enable_zerocopy_send_client": false, 00:34:02.163 "zerocopy_threshold": 0, 00:34:02.163 "tls_version": 0, 00:34:02.163 "enable_ktls": false 00:34:02.163 } 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }, 
00:34:02.163 { 00:34:02.163 "subsystem": "vmd", 00:34:02.163 "config": [] 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "subsystem": "accel", 00:34:02.163 "config": [ 00:34:02.163 { 00:34:02.163 "method": "accel_set_options", 00:34:02.163 "params": { 00:34:02.163 "small_cache_size": 128, 00:34:02.163 "large_cache_size": 16, 00:34:02.163 "task_count": 2048, 00:34:02.163 "sequence_count": 2048, 00:34:02.163 "buf_count": 2048 00:34:02.163 } 00:34:02.163 } 00:34:02.163 ] 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "subsystem": "bdev", 00:34:02.163 "config": [ 00:34:02.163 { 00:34:02.163 "method": "bdev_set_options", 00:34:02.163 "params": { 00:34:02.163 "bdev_io_pool_size": 65535, 00:34:02.163 "bdev_io_cache_size": 256, 00:34:02.163 "bdev_auto_examine": true, 00:34:02.163 "iobuf_small_cache_size": 128, 00:34:02.163 "iobuf_large_cache_size": 16 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "bdev_raid_set_options", 00:34:02.163 "params": { 00:34:02.163 "process_window_size_kb": 1024, 00:34:02.163 "process_max_bandwidth_mb_sec": 0 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "bdev_iscsi_set_options", 00:34:02.163 "params": { 00:34:02.163 "timeout_sec": 30 00:34:02.163 } 00:34:02.163 }, 00:34:02.163 { 00:34:02.163 "method": "bdev_nvme_set_options", 00:34:02.163 "params": { 00:34:02.163 "action_on_timeout": "none", 00:34:02.163 "timeout_us": 0, 00:34:02.163 "timeout_admin_us": 0, 00:34:02.163 "keep_alive_timeout_ms": 10000, 00:34:02.163 "arbitration_burst": 0, 00:34:02.163 "low_priority_weight": 0, 00:34:02.163 "medium_priority_weight": 0, 00:34:02.163 "high_priority_weight": 0, 00:34:02.163 "nvme_adminq_poll_period_us": 10000, 00:34:02.164 "nvme_ioq_poll_period_us": 0, 00:34:02.164 "io_queue_requests": 512, 00:34:02.164 "delay_cmd_submit": true, 00:34:02.164 "transport_retry_count": 4, 00:34:02.164 "bdev_retry_count": 3, 00:34:02.164 "transport_ack_timeout": 0, 00:34:02.164 "ctrlr_loss_timeout_sec": 0, 00:34:02.164 "reconnect_delay_sec": 0, 00:34:02.164 "fast_io_fail_timeout_sec": 0, 00:34:02.164 "disable_auto_failback": false, 00:34:02.164 "generate_uuids": false, 00:34:02.164 "transport_tos": 0, 00:34:02.164 "nvme_error_stat": false, 00:34:02.164 "rdma_srq_size": 0, 00:34:02.164 "io_path_stat": false, 00:34:02.164 "allow_accel_sequence": false, 00:34:02.164 "rdma_max_cq_size": 0, 00:34:02.164 "rdma_cm_event_timeout_ms": 0, 00:34:02.164 "dhchap_digests": [ 00:34:02.164 "sha256", 00:34:02.164 "sha384", 00:34:02.164 "sha512" 00:34:02.164 ], 00:34:02.164 "dhchap_dhgroups": [ 00:34:02.164 "null", 00:34:02.164 "ffdhe2048", 00:34:02.164 "ffdhe3072", 00:34:02.164 "ffdhe4096", 00:34:02.164 "ffdhe6144", 00:34:02.164 "ffdhe8192" 00:34:02.164 ] 00:34:02.164 } 00:34:02.164 }, 00:34:02.164 { 00:34:02.164 "method": "bdev_nvme_attach_controller", 00:34:02.164 "params": { 00:34:02.164 "name": "nvme0", 00:34:02.164 "trtype": "TCP", 00:34:02.164 "adrfam": "IPv4", 00:34:02.164 "traddr": "127.0.0.1", 00:34:02.164 "trsvcid": "4420", 00:34:02.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.164 "prchk_reftag": false, 00:34:02.164 "prchk_guard": false, 00:34:02.164 "ctrlr_loss_timeout_sec": 0, 00:34:02.164 "reconnect_delay_sec": 0, 00:34:02.164 "fast_io_fail_timeout_sec": 0, 00:34:02.164 "psk": "key0", 00:34:02.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.164 "hdgst": false, 00:34:02.164 "ddgst": false, 00:34:02.164 "multipath": "multipath" 00:34:02.164 } 00:34:02.164 }, 00:34:02.164 { 00:34:02.164 "method": "bdev_nvme_set_hotplug", 00:34:02.164 "params": { 
00:34:02.164 "period_us": 100000, 00:34:02.164 "enable": false 00:34:02.164 } 00:34:02.164 }, 00:34:02.164 { 00:34:02.164 "method": "bdev_wait_for_examine" 00:34:02.164 } 00:34:02.164 ] 00:34:02.164 }, 00:34:02.164 { 00:34:02.164 "subsystem": "nbd", 00:34:02.164 "config": [] 00:34:02.164 } 00:34:02.164 ] 00:34:02.164 }' 00:34:02.164 [2024-12-13 09:45:14.525032] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 00:34:02.164 [2024-12-13 09:45:14.525079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599699 ] 00:34:02.422 [2024-12-13 09:45:14.586940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.422 [2024-12-13 09:45:14.625636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.423 [2024-12-13 09:45:14.786110] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:02.989 09:45:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.989 09:45:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:02.989 09:45:15 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:02.989 09:45:15 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:02.989 09:45:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.247 09:45:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:03.247 09:45:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:03.247 09:45:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:03.247 09:45:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:03.247 09:45:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:03.247 09:45:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:03.247 09:45:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.505 09:45:15 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:03.505 09:45:15 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:03.505 09:45:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:03.505 09:45:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:03.505 09:45:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:03.505 09:45:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:03.505 09:45:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:03.764 09:45:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:03.764 09:45:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:03.764 09:45:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:03.764 09:45:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:03.764 09:45:16 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:03.764 09:45:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:03.764 09:45:16 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.JpDpT3tDFm /tmp/tmp.mkrt4ZaSbb 00:34:03.764 09:45:16 keyring_file -- keyring/file.sh@20 -- # killprocess 3599699 00:34:03.764 09:45:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3599699 ']' 00:34:03.764 09:45:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3599699 00:34:03.764 09:45:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:03.764 09:45:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.764 09:45:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599699 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599699' 00:34:04.023 killing process with pid 3599699 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@973 -- # kill 3599699 00:34:04.023 Received shutdown signal, test time was about 1.000000 seconds 00:34:04.023 00:34:04.023 Latency(us) 00:34:04.023 [2024-12-13T08:45:16.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.023 [2024-12-13T08:45:16.389Z] =================================================================================================================== 00:34:04.023 [2024-12-13T08:45:16.389Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@978 -- # wait 3599699 00:34:04.023 09:45:16 keyring_file -- keyring/file.sh@21 -- # killprocess 3597822 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3597822 ']' 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3597822 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3597822 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3597822' 00:34:04.023 killing process with pid 3597822 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@973 -- # kill 3597822 00:34:04.023 09:45:16 keyring_file -- common/autotest_common.sh@978 -- # wait 3597822 00:34:04.591 00:34:04.591 real 0m11.410s 00:34:04.591 user 0m28.275s 00:34:04.591 sys 0m2.611s 00:34:04.591 09:45:16 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.591 09:45:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:04.591 ************************************ 00:34:04.591 END TEST keyring_file 00:34:04.591 ************************************ 00:34:04.591 09:45:16 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:04.591 09:45:16 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:04.591 09:45:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:04.591 09:45:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.591 09:45:16 
-- common/autotest_common.sh@10 -- # set +x 00:34:04.591 ************************************ 00:34:04.591 START TEST keyring_linux 00:34:04.591 ************************************ 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:04.591 Joined session keyring: 695342123 00:34:04.591 * Looking for test storage... 00:34:04.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.591 09:45:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:04.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.591 --rc genhtml_branch_coverage=1 00:34:04.591 --rc genhtml_function_coverage=1 00:34:04.591 --rc genhtml_legend=1 00:34:04.591 --rc geninfo_all_blocks=1 00:34:04.591 --rc geninfo_unexecuted_blocks=1 00:34:04.591 00:34:04.591 ' 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:04.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.591 --rc genhtml_branch_coverage=1 00:34:04.591 --rc genhtml_function_coverage=1 00:34:04.591 --rc genhtml_legend=1 00:34:04.591 --rc geninfo_all_blocks=1 00:34:04.591 --rc geninfo_unexecuted_blocks=1 00:34:04.591 00:34:04.591 ' 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:04.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.591 --rc genhtml_branch_coverage=1 00:34:04.591 --rc genhtml_function_coverage=1 00:34:04.591 --rc genhtml_legend=1 00:34:04.591 --rc geninfo_all_blocks=1 00:34:04.591 --rc geninfo_unexecuted_blocks=1 00:34:04.591 00:34:04.591 ' 00:34:04.591 09:45:16 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:04.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.591 --rc genhtml_branch_coverage=1 00:34:04.591 --rc genhtml_function_coverage=1 00:34:04.591 --rc genhtml_legend=1 00:34:04.591 --rc geninfo_all_blocks=1 00:34:04.591 --rc geninfo_unexecuted_blocks=1 00:34:04.591 00:34:04.591 ' 00:34:04.591 09:45:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:04.591 09:45:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.591 09:45:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.592 09:45:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.592 09:45:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.592 09:45:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.592 09:45:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.592 09:45:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.592 09:45:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.592 09:45:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.592 09:45:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:04.592 09:45:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
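nvmf/common.sh, sourced above, only exports environment for the rest of the run: the TCP ports (4420/4421/4422), the loopback target address, the subsystem NQN nqn.2016-06.io.spdk:testnqn, and a host NQN/ID pair freshly generated by nvme gen-hostnqn. As a rough illustration of how those variables are normally combined on the initiator side (an illustrative sketch only; the nvme-cli flag spellings are assumed, and no such connect is issued in this keyring test, which talks to bdevperf over RPC instead):

# sketch: a host-side connect built from the values exported above (not run in this log)
nvme connect -t tcp -a 127.0.0.1 -s 4420 \
  -n nqn.2016-06.io.spdk:testnqn \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
  --hostid=80b56b8f-cbc7-e911-906e-0017a4403562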
00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:04.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:04.592 09:45:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:04.592 09:45:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:04.592 09:45:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:04.851 /tmp/:spdk-test:key0 00:34:04.851 09:45:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:04.851 09:45:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:04.851 
09:45:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:04.851 09:45:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:04.851 09:45:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:04.851 09:45:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:04.851 /tmp/:spdk-test:key1 00:34:04.851 09:45:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3600239 00:34:04.851 09:45:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3600239 00:34:04.851 09:45:17 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3600239 ']' 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.851 09:45:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:04.851 [2024-12-13 09:45:17.069230] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
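prep_key above wraps each raw hex key (00112233445566778899aabbccddeeff for key0, 112233445566778899aabbccddeeff00 for key1) in the NVMe TLS PSK interchange format via a small inline python helper, writes the result to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1, and locks the files down to mode 0600. A minimal manual sketch of the same steps, reusing the already-formatted strings that the test prints just below rather than re-deriving the base64 payload and its trailing checksum:

# sketch of the prep_key file handling above; the PSK strings are copied from this log
PSK0='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
PSK1='NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:'
printf '%s' "$PSK0" > /tmp/:spdk-test:key0
printf '%s' "$PSK1" > /tmp/:spdk-test:key1
chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1   # PSK files must stay private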
00:34:04.851 [2024-12-13 09:45:17.069283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600239 ] 00:34:04.851 [2024-12-13 09:45:17.130342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.851 [2024-12-13 09:45:17.172342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:05.110 [2024-12-13 09:45:17.395790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.110 null0 00:34:05.110 [2024-12-13 09:45:17.427842] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:05.110 [2024-12-13 09:45:17.428126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:05.110 246780027 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:05.110 1038558270 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3600251 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:05.110 09:45:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3600251 /var/tmp/bperf.sock 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3600251 ']' 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:05.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.110 09:45:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:05.369 [2024-12-13 09:45:17.499120] Starting SPDK v25.01-pre git sha1 575641720 / DPDK 24.03.0 initialization... 
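The two keyctl add user ... @s calls above load the same PSK strings into the kernel session keyring that the keyctl-session-wrapper joined at the start of this test (serial 695342123); the serials they return, 246780027 for key0 and 1038558270 for key1, are what the checks further down resolve and compare. Condensed into the underlying keyutils commands (a sketch, with $PSK0 and $PSK1 as in the previous snippet):

# sketch of the session-keyring side of this test (keyctl from keyutils, @s = session keyring)
keyctl add user :spdk-test:key0 "$PSK0" @s    # printed 246780027 in this run
keyctl add user :spdk-test:key1 "$PSK1" @s    # printed 1038558270 in this run
keyctl search @s user :spdk-test:key0         # name -> serial lookup used by get_keysn
keyctl print 246780027                        # payload dump, compared against $PSK0 below
keyctl unlink 246780027                       # cleanup, mirrored by unlink_key at the end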
00:34:05.369 [2024-12-13 09:45:17.499163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600251 ] 00:34:05.369 [2024-12-13 09:45:17.561052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.369 [2024-12-13 09:45:17.599767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.369 09:45:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.369 09:45:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:05.369 09:45:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:05.369 09:45:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:05.628 09:45:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:05.628 09:45:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:05.886 09:45:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:05.886 09:45:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:05.886 [2024-12-13 09:45:18.241730] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:06.145 nvme0n1 00:34:06.145 09:45:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:06.145 09:45:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:06.145 09:45:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:06.145 09:45:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:06.145 09:45:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:06.145 09:45:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:06.404 09:45:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:06.404 09:45:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.404 09:45:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@25 -- # sn=246780027 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:06.404 09:45:18 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 246780027 == \2\4\6\7\8\0\0\2\7 ]] 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 246780027 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:06.404 09:45:18 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:06.662 Running I/O for 1 seconds... 00:34:07.598 20728.00 IOPS, 80.97 MiB/s 00:34:07.598 Latency(us) 00:34:07.598 [2024-12-13T08:45:19.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:07.598 nvme0n1 : 1.01 20728.52 80.97 0.00 0.00 6153.85 2559.02 7864.32 00:34:07.598 [2024-12-13T08:45:19.964Z] =================================================================================================================== 00:34:07.598 [2024-12-13T08:45:19.964Z] Total : 20728.52 80.97 0.00 0.00 6153.85 2559.02 7864.32 00:34:07.598 { 00:34:07.598 "results": [ 00:34:07.598 { 00:34:07.598 "job": "nvme0n1", 00:34:07.598 "core_mask": "0x2", 00:34:07.598 "workload": "randread", 00:34:07.598 "status": "finished", 00:34:07.598 "queue_depth": 128, 00:34:07.598 "io_size": 4096, 00:34:07.598 "runtime": 1.00615, 00:34:07.598 "iops": 20728.51960443274, 00:34:07.598 "mibps": 80.97077970481539, 00:34:07.598 "io_failed": 0, 00:34:07.598 "io_timeout": 0, 00:34:07.598 "avg_latency_us": 6153.84665022741, 00:34:07.598 "min_latency_us": 2559.024761904762, 00:34:07.598 "max_latency_us": 7864.32 00:34:07.598 } 00:34:07.598 ], 00:34:07.598 "core_count": 1 00:34:07.598 } 00:34:07.598 09:45:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:07.598 09:45:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:07.856 09:45:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:07.856 09:45:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:07.856 09:45:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:07.856 09:45:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:07.856 09:45:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:07.856 09:45:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
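Everything above drives the bdevperf instance purely over its RPC socket: the Linux keyring plugin is enabled before framework init (bdevperf was started with --wait-for-rpc), the controller is attached with --psk :spdk-test:key0, and keyring_get_keys plus keyctl confirm that exactly one key is registered and that its serial and payload match the kernel copy. Collapsed into plain rpc.py calls with the socket and paths shown in this log, the successful path looks roughly like:

# rough RPC-level equivalent of the successful attach exercised above
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_linux_set_options --enable      # let bdevperf resolve :spdk-test:key0 via the keyring
$RPC framework_start_init
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$RPC keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bperf.sock perform_tests       # the 1-second randread run shown above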
00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:08.115 09:45:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:08.115 [2024-12-13 09:45:20.406582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:08.115 [2024-12-13 09:45:20.407438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788220 (107): Transport endpoint is not connected 00:34:08.115 [2024-12-13 09:45:20.408433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788220 (9): Bad file descriptor 00:34:08.115 [2024-12-13 09:45:20.409434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:08.115 [2024-12-13 09:45:20.409444] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:08.115 [2024-12-13 09:45:20.409455] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:08.115 [2024-12-13 09:45:20.409467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
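The errors above, and the JSON-RPC request/response dump that follows, are the expected output of the negative half of the test: after nvme0 is detached, the NOT wrapper re-runs the attach with --psk :spdk-test:key1 and asserts that it fails (response code -5, Input/output error). A sketch of that assertion and of the cleanup that the EXIT trap runs afterwards, again using the assumed $RPC shorthand from the previous snippet:

# sketch of the negative check and cleanup; the NOT helper in the log inverts the exit status
if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
  echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
fi
keyctl unlink 246780027      # unlink_key key0, '1 links removed' below
keyctl unlink 1038558270     # unlink_key key1, '1 links removed' below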
00:34:08.115 request: 00:34:08.115 { 00:34:08.115 "name": "nvme0", 00:34:08.115 "trtype": "tcp", 00:34:08.115 "traddr": "127.0.0.1", 00:34:08.115 "adrfam": "ipv4", 00:34:08.115 "trsvcid": "4420", 00:34:08.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.115 "prchk_reftag": false, 00:34:08.115 "prchk_guard": false, 00:34:08.115 "hdgst": false, 00:34:08.115 "ddgst": false, 00:34:08.115 "psk": ":spdk-test:key1", 00:34:08.115 "allow_unrecognized_csi": false, 00:34:08.115 "method": "bdev_nvme_attach_controller", 00:34:08.115 "req_id": 1 00:34:08.115 } 00:34:08.115 Got JSON-RPC error response 00:34:08.115 response: 00:34:08.115 { 00:34:08.115 "code": -5, 00:34:08.115 "message": "Input/output error" 00:34:08.115 } 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@33 -- # sn=246780027 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 246780027 00:34:08.115 1 links removed 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@33 -- # sn=1038558270 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1038558270 00:34:08.115 1 links removed 00:34:08.115 09:45:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3600251 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3600251 ']' 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3600251 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.115 09:45:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600251 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600251' 00:34:08.379 killing process with pid 3600251 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 3600251 00:34:08.379 Received shutdown signal, test time was about 1.000000 seconds 00:34:08.379 00:34:08.379 
Latency(us) 00:34:08.379 [2024-12-13T08:45:20.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.379 [2024-12-13T08:45:20.745Z] =================================================================================================================== 00:34:08.379 [2024-12-13T08:45:20.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 3600251 00:34:08.379 09:45:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3600239 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3600239 ']' 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3600239 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3600239 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3600239' 00:34:08.379 killing process with pid 3600239 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 3600239 00:34:08.379 09:45:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 3600239 00:34:08.640 00:34:08.640 real 0m4.262s 00:34:08.640 user 0m7.901s 00:34:08.640 sys 0m1.428s 00:34:08.640 09:45:20 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.640 09:45:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:08.640 ************************************ 00:34:08.640 END TEST keyring_linux 00:34:08.640 ************************************ 00:34:08.898 09:45:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:08.898 09:45:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:08.898 09:45:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:08.898 09:45:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:08.898 09:45:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:08.898 09:45:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:08.898 09:45:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:08.898 09:45:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.898 09:45:21 -- common/autotest_common.sh@10 -- # set +x 00:34:08.899 09:45:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:08.899 09:45:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:08.899 09:45:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:08.899 09:45:21 -- common/autotest_common.sh@10 -- # set +x 00:34:14.174 INFO: APP EXITING 
00:34:14.174 INFO: killing all VMs 00:34:14.174 INFO: killing vhost app 00:34:14.174 INFO: EXIT DONE 00:34:16.080 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:34:16.080 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:16.080 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:16.081 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:16.339 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:16.339 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:16.339 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:18.874 Cleaning 00:34:18.874 Removing: /var/run/dpdk/spdk0/config 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:18.874 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:18.874 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:18.874 Removing: /var/run/dpdk/spdk1/config 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:18.874 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:18.874 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:18.874 Removing: /var/run/dpdk/spdk2/config 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:18.874 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:18.874 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:18.874 Removing: /var/run/dpdk/spdk3/config 00:34:18.874 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:18.874 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:18.874 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:18.874 Removing: /var/run/dpdk/spdk4/config 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:18.874 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:18.874 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:18.874 Removing: /dev/shm/bdev_svc_trace.1 00:34:18.874 Removing: /dev/shm/nvmf_trace.0 00:34:18.874 Removing: /dev/shm/spdk_tgt_trace.pid3131138 00:34:18.874 Removing: /var/run/dpdk/spdk0 00:34:18.874 Removing: /var/run/dpdk/spdk1 00:34:18.874 Removing: /var/run/dpdk/spdk2 00:34:18.874 Removing: /var/run/dpdk/spdk3 00:34:18.874 Removing: /var/run/dpdk/spdk4 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3129051 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3130082 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3131138 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3131761 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3132683 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3132814 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3133859 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3133875 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3134222 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3135697 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3136974 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3137546 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3137768 00:34:18.874 Removing: /var/run/dpdk/spdk_pid3137952 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3138226 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3138471 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3138711 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3139266 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3140112 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3143230 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3143406 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3143542 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3143758 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3144101 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3144242 00:34:19.133 Removing: /var/run/dpdk/spdk_pid3144626 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3144728 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3144989 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3144998 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3145248 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3145257 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3145810 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3146053 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3146340 00:34:19.134 Removing: 
/var/run/dpdk/spdk_pid3150001 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3154373 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3164165 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3164841 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3169024 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3169288 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3173479 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3179240 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3181928 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3192473 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3201223 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3202826 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3203797 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3220253 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3224255 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3269278 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3274539 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3280192 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3286475 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3286566 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3287750 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3288636 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3289525 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3290089 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3290196 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3290423 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3290436 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3290503 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3291341 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3292227 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3293116 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3293788 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3293791 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3294019 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3295182 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3296177 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3304074 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3332494 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3336915 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3338684 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3340471 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3340494 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3340717 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3340818 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3341241 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3343012 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3343886 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3344246 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3346500 00:34:19.134 Removing: /var/run/dpdk/spdk_pid3346975 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3347479 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3351653 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3356927 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3356928 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3356929 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3360765 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3369555 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3373682 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3379712 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3380877 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3382218 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3383666 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3388273 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3392366 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3396276 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3403655 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3403745 00:34:19.393 Removing: 
/var/run/dpdk/spdk_pid3408160 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3408399 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3408605 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3409049 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3409074 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3413916 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3414522 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3418800 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3421464 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3426734 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3431774 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3440330 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3447404 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3447413 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3466261 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3466776 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3467244 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3467913 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3468544 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3469098 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3469559 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3470024 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3474189 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3474417 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3480369 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3480640 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3486006 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3489947 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3499662 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3500138 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3504382 00:34:19.393 Removing: /var/run/dpdk/spdk_pid3504664 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3509215 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3514727 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3517254 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3526989 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3535584 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3537265 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3538158 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3553874 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3558113 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3560848 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3568239 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3568315 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3573348 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3575265 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3577184 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3578210 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3580333 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3581384 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3589816 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3590376 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3590822 00:34:19.394 Removing: /var/run/dpdk/spdk_pid3593036 00:34:19.652 Removing: /var/run/dpdk/spdk_pid3593489 00:34:19.652 Removing: /var/run/dpdk/spdk_pid3593941 00:34:19.653 Removing: /var/run/dpdk/spdk_pid3597822 00:34:19.653 Removing: /var/run/dpdk/spdk_pid3597828 00:34:19.653 Removing: /var/run/dpdk/spdk_pid3599699 00:34:19.653 Removing: /var/run/dpdk/spdk_pid3600239 00:34:19.653 Removing: /var/run/dpdk/spdk_pid3600251 00:34:19.653 Clean 00:34:19.653 09:45:31 -- common/autotest_common.sh@1453 -- # return 0 00:34:19.653 09:45:31 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:19.653 09:45:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.653 09:45:31 -- common/autotest_common.sh@10 -- # set +x 00:34:19.653 09:45:31 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:34:19.653 09:45:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.653 09:45:31 -- common/autotest_common.sh@10 -- # set +x 00:34:19.653 09:45:31 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:19.653 09:45:31 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:19.653 09:45:31 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:19.653 09:45:31 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:19.653 09:45:31 -- spdk/autotest.sh@398 -- # hostname 00:34:19.653 09:45:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:19.911 geninfo: WARNING: invalid characters removed from testname! 00:34:41.830 09:45:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:43.204 09:45:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:45.103 09:45:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:47.002 09:45:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.903 09:46:00 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:50.803 09:46:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:52.705 09:46:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:52.705 09:46:04 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:52.705 09:46:04 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:52.705 09:46:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:52.705 09:46:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:52.705 09:46:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:52.705 + [[ -n 3052620 ]] 00:34:52.705 + sudo kill 3052620 00:34:52.716 [Pipeline] } 00:34:52.731 [Pipeline] // stage 00:34:52.736 [Pipeline] } 00:34:52.750 [Pipeline] // timeout 00:34:52.755 [Pipeline] } 00:34:52.769 [Pipeline] // catchError 00:34:52.774 [Pipeline] } 00:34:52.788 [Pipeline] // wrap 00:34:52.794 [Pipeline] } 00:34:52.806 [Pipeline] // catchError 00:34:52.815 [Pipeline] stage 00:34:52.817 [Pipeline] { (Epilogue) 00:34:52.830 [Pipeline] catchError 00:34:52.831 [Pipeline] { 00:34:52.843 [Pipeline] echo 00:34:52.845 Cleanup processes 00:34:52.850 [Pipeline] sh 00:34:53.174 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.174 3610702 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.248 [Pipeline] sh 00:34:53.557 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:53.557 ++ grep -v 'sudo pgrep' 00:34:53.557 ++ awk '{print $1}' 00:34:53.557 + sudo kill -9 00:34:53.557 + true 00:34:53.569 [Pipeline] sh 00:34:53.853 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:06.071 [Pipeline] sh 00:35:06.356 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:06.356 Artifacts sizes are good 00:35:06.370 [Pipeline] archiveArtifacts 00:35:06.378 Archiving artifacts 00:35:06.499 [Pipeline] sh 00:35:06.785 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:06.799 [Pipeline] cleanWs 00:35:06.809 [WS-CLEANUP] Deleting project workspace... 00:35:06.809 [WS-CLEANUP] Deferred wipeout is used... 00:35:06.815 [WS-CLEANUP] done 00:35:06.817 [Pipeline] } 00:35:06.832 [Pipeline] // catchError 00:35:06.842 [Pipeline] sh 00:35:07.123 + logger -p user.info -t JENKINS-CI 00:35:07.131 [Pipeline] } 00:35:07.144 [Pipeline] // stage 00:35:07.148 [Pipeline] } 00:35:07.160 [Pipeline] // node 00:35:07.163 [Pipeline] End of Pipeline 00:35:07.197 Finished: SUCCESS